00:00:00.000 Started by upstream project "autotest-per-patch" build number 132330
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "jbp-per-patch" build number 25766
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.103 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.104 The recommended git tool is: git
00:00:00.104 using credential 00000000-0000-0000-0000-000000000002
00:00:00.105 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.137 Fetching changes from the remote Git repository
00:00:00.139 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.173 Using shallow fetch with depth 1
00:00:00.173 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.173 > git --version # timeout=10
00:00:00.210 > git --version # 'git version 2.39.2'
00:00:00.210 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.230 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.230 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/84/24384/13 # timeout=5
00:00:08.796 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.807 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.818 Checking out Revision 6d4840695fb479ead742a39eb3a563a20cd15407 (FETCH_HEAD)
00:00:08.818 > git config core.sparsecheckout # timeout=10
00:00:08.828 > git read-tree -mu HEAD # timeout=10
00:00:08.842 > git checkout -f 6d4840695fb479ead742a39eb3a563a20cd15407 # timeout=5
00:00:08.863 Commit message: "jenkins/jjb-config: Commonize distro-based params"
00:00:08.863 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.976 [Pipeline] Start of Pipeline
00:00:08.994 [Pipeline] library
00:00:08.996 Loading library shm_lib@master
00:00:08.996 Library shm_lib@master is cached. Copying from home.
00:00:09.016 [Pipeline] node
00:00:09.027 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:09.029 [Pipeline] {
00:00:09.042 [Pipeline] catchError
00:00:09.044 [Pipeline] {
00:00:09.058 [Pipeline] wrap
00:00:09.066 [Pipeline] {
00:00:09.076 [Pipeline] stage
00:00:09.078 [Pipeline] { (Prologue)
00:00:09.280 [Pipeline] sh
00:00:09.562 + logger -p user.info -t JENKINS-CI
00:00:09.578 [Pipeline] echo
00:00:09.579 Node: WFP8
00:00:09.585 [Pipeline] sh
00:00:09.880 [Pipeline] setCustomBuildProperty
00:00:09.890 [Pipeline] echo
00:00:09.891 Cleanup processes
00:00:09.895 [Pipeline] sh
00:00:10.172 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.172 2566361 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.184 [Pipeline] sh
00:00:10.466 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.466 ++ grep -v 'sudo pgrep'
00:00:10.466 ++ awk '{print $1}'
00:00:10.466 + sudo kill -9
00:00:10.466 + true
00:00:10.482 [Pipeline] cleanWs
00:00:10.492 [WS-CLEANUP] Deleting project workspace...
00:00:10.492 [WS-CLEANUP] Deferred wipeout is used...
00:00:10.498 [WS-CLEANUP] done
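The cleanup step above builds a kill list with pgrep, excluding its own pgrep invocation, and tolerates an empty match: the bare `kill -9` fails when no PIDs are substituted, and the trailing `true` keeps the step green. A minimal standalone sketch of the same pattern, with the workspace path copied from the log as an illustrative parameter:

# kill leftover processes whose command line mentions the workspace;
# '|| true' keeps the step from failing when pgrep matches nothing
ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
[ -n "$pids" ] && sudo kill -9 $pids || true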
00:00:10.503 [Pipeline] setCustomBuildProperty
00:00:10.518 [Pipeline] sh
00:00:10.801 + sudo git config --global --replace-all safe.directory '*'
00:00:10.902 [Pipeline] httpRequest
00:00:11.386 [Pipeline] echo
00:00:11.388 Sorcerer 10.211.164.20 is alive
00:00:11.399 [Pipeline] retry
00:00:11.401 [Pipeline] {
00:00:11.416 [Pipeline] httpRequest
00:00:11.420 HttpMethod: GET
00:00:11.421 URL: http://10.211.164.20/packages/jbp_6d4840695fb479ead742a39eb3a563a20cd15407.tar.gz
00:00:11.421 Sending request to url: http://10.211.164.20/packages/jbp_6d4840695fb479ead742a39eb3a563a20cd15407.tar.gz
00:00:11.439 Response Code: HTTP/1.1 200 OK
00:00:11.440 Success: Status code 200 is in the accepted range: 200,404
00:00:11.440 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_6d4840695fb479ead742a39eb3a563a20cd15407.tar.gz
00:00:39.514 [Pipeline] }
00:00:39.537 [Pipeline] // retry
00:00:39.546 [Pipeline] sh
00:00:39.833 + tar --no-same-owner -xf jbp_6d4840695fb479ead742a39eb3a563a20cd15407.tar.gz
00:00:39.851 [Pipeline] httpRequest
00:00:40.250 [Pipeline] echo
00:00:40.252 Sorcerer 10.211.164.20 is alive
00:00:40.262 [Pipeline] retry
00:00:40.265 [Pipeline] {
00:00:40.280 [Pipeline] httpRequest
00:00:40.285 HttpMethod: GET
00:00:40.285 URL: http://10.211.164.20/packages/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz
00:00:40.286 Sending request to url: http://10.211.164.20/packages/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz
00:00:40.298 Response Code: HTTP/1.1 200 OK
00:00:40.299 Success: Status code 200 is in the accepted range: 200,404
00:00:40.299 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz
00:01:29.266 [Pipeline] }
00:01:29.283 [Pipeline] // retry
00:01:29.291 [Pipeline] sh
00:01:29.578 + tar --no-same-owner -xf spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz
00:01:32.880 [Pipeline] sh
00:01:33.165 + git -C spdk log --oneline -n5
00:01:33.165 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option
00:01:33.165 73f18e890 lib/reduce: fix the magic number of empty mapping detection.
00:01:33.165 029355612 bdev_ut: add manual examine bdev unit test case
00:01:33.165 fc96810c2 bdev: remove bdev from examine allow list on unregister
00:01:33.165 a0c128549 bdev/nvme: Make bdev nvme get and set opts APIs public
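Instead of cloning, the two httpRequest/retry blocks above pull pre-packaged tarballs of the jbp and spdk repositories at the pinned revisions from the internal "Sorcerer" cache at 10.211.164.20, and `tar --no-same-owner` unpacks them owned by the Jenkins user. A rough shell equivalent of one fetch-and-extract round, assuming curl is available (the pipeline itself uses the Jenkins httpRequest step, not curl):

# fetch a cached source snapshot and unpack it, dropping archived ownership
rev=6d4840695fb479ead742a39eb3a563a20cd15407
curl -fO "http://10.211.164.20/packages/jbp_${rev}.tar.gz"
tar --no-same-owner -xf "jbp_${rev}.tar.gz"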
00:01:33.176 [Pipeline] }
00:01:33.190 [Pipeline] // stage
00:01:33.200 [Pipeline] stage
00:01:33.202 [Pipeline] { (Prepare)
00:01:33.219 [Pipeline] writeFile
00:01:33.235 [Pipeline] sh
00:01:33.519 + logger -p user.info -t JENKINS-CI
00:01:33.532 [Pipeline] sh
00:01:33.816 + logger -p user.info -t JENKINS-CI
00:01:33.828 [Pipeline] sh
00:01:34.112 + cat autorun-spdk.conf
00:01:34.112 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:34.112 SPDK_TEST_NVMF=1
00:01:34.112 SPDK_TEST_NVME_CLI=1
00:01:34.112 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:34.112 SPDK_TEST_NVMF_NICS=e810
00:01:34.112 SPDK_TEST_VFIOUSER=1
00:01:34.112 SPDK_RUN_UBSAN=1
00:01:34.112 NET_TYPE=phy
00:01:34.119 RUN_NIGHTLY=0
00:01:34.124 [Pipeline] readFile
00:01:34.147 [Pipeline] withEnv
00:01:34.149 [Pipeline] {
00:01:34.160 [Pipeline] sh
00:01:34.444 + set -ex
00:01:34.444 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:34.444 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:34.444 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:34.444 ++ SPDK_TEST_NVMF=1
00:01:34.444 ++ SPDK_TEST_NVME_CLI=1
00:01:34.444 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:34.444 ++ SPDK_TEST_NVMF_NICS=e810
00:01:34.444 ++ SPDK_TEST_VFIOUSER=1
00:01:34.444 ++ SPDK_RUN_UBSAN=1
00:01:34.444 ++ NET_TYPE=phy
00:01:34.444 ++ RUN_NIGHTLY=0
00:01:34.444 + case $SPDK_TEST_NVMF_NICS in
00:01:34.444 + DRIVERS=ice
00:01:34.444 + [[ tcp == \r\d\m\a ]]
00:01:34.444 + [[ -n ice ]]
00:01:34.444 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:34.444 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:34.444 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:34.444 rmmod: ERROR: Module irdma is not currently loaded
00:01:34.444 rmmod: ERROR: Module i40iw is not currently loaded
00:01:34.444 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:34.444 + true
00:01:34.444 + for D in $DRIVERS
00:01:34.444 + sudo modprobe ice
00:01:34.444 + exit 0
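The Prepare stage sources autorun-spdk.conf and derives the driver set from SPDK_TEST_NVMF_NICS (e810 maps to the ice driver): it first unloads the RDMA-capable modules, tolerating "not currently loaded" errors, then loads the driver the test needs. A condensed restatement of the logic visible in the trace above:

# pick the kernel driver for the NIC under test and (re)load it
case "$SPDK_TEST_NVMF_NICS" in
    e810) DRIVERS=ice ;;
esac
sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true   # absent modules are fine
for D in $DRIVERS; do
    sudo modprobe "$D"
done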
00:01:34.453 [Pipeline] }
00:01:34.468 [Pipeline] // withEnv
00:01:34.473 [Pipeline] }
00:01:34.487 [Pipeline] // stage
00:01:34.497 [Pipeline] catchError
00:01:34.498 [Pipeline] {
00:01:34.512 [Pipeline] timeout
00:01:34.512 Timeout set to expire in 1 hr 0 min
00:01:34.514 [Pipeline] {
00:01:34.528 [Pipeline] stage
00:01:34.530 [Pipeline] { (Tests)
00:01:34.545 [Pipeline] sh
00:01:34.830 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:34.830 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:34.830 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:34.830 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:34.830 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:34.830 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:34.830 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:34.830 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:34.830 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:34.830 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:34.830 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:34.830 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:34.830 + source /etc/os-release
00:01:34.830 ++ NAME='Fedora Linux'
00:01:34.830 ++ VERSION='39 (Cloud Edition)'
00:01:34.830 ++ ID=fedora
00:01:34.830 ++ VERSION_ID=39
00:01:34.830 ++ VERSION_CODENAME=
00:01:34.830 ++ PLATFORM_ID=platform:f39
00:01:34.830 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:34.830 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:34.830 ++ LOGO=fedora-logo-icon
00:01:34.830 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:34.830 ++ HOME_URL=https://fedoraproject.org/
00:01:34.830 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:34.830 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:34.830 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:34.830 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:34.830 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:34.830 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:34.830 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:34.830 ++ SUPPORT_END=2024-11-12
00:01:34.830 ++ VARIANT='Cloud Edition'
00:01:34.830 ++ VARIANT_ID=cloud
00:01:34.830 + uname -a
00:01:34.830 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:34.830 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:37.367 Hugepages
00:01:37.367 node hugesize free / total
00:01:37.367 node0 1048576kB 0 / 0
00:01:37.367 node0 2048kB 0 / 0
00:01:37.367 node1 1048576kB 0 / 0
00:01:37.367 node1 2048kB 0 / 0
00:01:37.367
00:01:37.367 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:37.367 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:37.367 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:37.367 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:37.367 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:37.367 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:37.367 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:37.367 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:37.367 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:37.367 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:37.367 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:37.367 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:37.367 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:37.367 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:37.367 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:37.367 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:37.367 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:37.367 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:37.367 + rm -f /tmp/spdk-ld-path
00:01:37.367 + source autorun-spdk.conf
00:01:37.367 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:37.367 ++ SPDK_TEST_NVMF=1
00:01:37.367 ++ SPDK_TEST_NVME_CLI=1
00:01:37.367 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:37.367 ++ SPDK_TEST_NVMF_NICS=e810
00:01:37.367 ++ SPDK_TEST_VFIOUSER=1
00:01:37.367 ++ SPDK_RUN_UBSAN=1
00:01:37.367 ++ NET_TYPE=phy
00:01:37.367 ++ RUN_NIGHTLY=0
00:01:37.367 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:37.367 + [[ -n '' ]]
00:01:37.367 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
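The `source /etc/os-release` step above works because that file is plain shell `KEY=value` syntax, so any script can branch on the distro it reports. A small illustrative snippet of the same idiom (the echoed messages are only examples):

# /etc/os-release is shell-sourceable; branch on ID/VERSION_ID
. /etc/os-release
echo "Running on $PRETTY_NAME"
if [ "$ID" = fedora ] && [ "$VERSION_ID" -ge 39 ]; then
    echo "Fedora 39+ specific setup could go here"
fi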
00:01:37.367 + for M in /var/spdk/build-*-manifest.txt
00:01:37.367 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:37.367 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:37.367 + for M in /var/spdk/build-*-manifest.txt
00:01:37.367 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:37.367 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:37.367 + for M in /var/spdk/build-*-manifest.txt
00:01:37.367 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:37.367 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:37.626 ++ uname
00:01:37.626 + [[ Linux == \L\i\n\u\x ]]
00:01:37.626 + sudo dmesg -T
00:01:37.626 + sudo dmesg --clear
00:01:37.626 + dmesg_pid=2567283
00:01:37.626 + [[ Fedora Linux == FreeBSD ]]
00:01:37.626 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:37.626 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:37.626 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:37.626 + [[ -x /usr/src/fio-static/fio ]]
00:01:37.626 + export FIO_BIN=/usr/src/fio-static/fio
00:01:37.626 + FIO_BIN=/usr/src/fio-static/fio
00:01:37.626 + sudo dmesg -Tw
00:01:37.626 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:37.626 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:37.626 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:37.626 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:37.626 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:37.626 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:37.626 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:37.626 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:37.626 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
12:53:40 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
12:53:40 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
12:53:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
12:53:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
12:53:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
12:53:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
12:53:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
12:53:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
12:53:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
12:53:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
12:53:40 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
12:53:40 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
12:53:40 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
12:53:40 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
12:53:40 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
12:53:40 -- scripts/common.sh@15 -- $ shopt -s extglob
12:53:40 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
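The three-iteration trace at the top of this block is one guarded copy loop expanded by xtrace: each existing build manifest is copied into the job's output directory. The same loop written out directly, with the destination held in a variable for clarity:

# collect whichever build manifests exist into the job's output directory
out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
for M in /var/spdk/build-*-manifest.txt; do
    [[ -f $M ]] && cp "$M" "$out/"
done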
12:53:40 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
12:53:40 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
12:53:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:53:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:53:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:53:40 -- paths/export.sh@5 -- $ export PATH
12:53:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:53:40 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
12:53:40 -- common/autobuild_common.sh@486 -- $ date +%s
12:53:40 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732017220.XXXXXX
12:53:40 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732017220.TVEB6K
12:53:40 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
12:53:40 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
12:53:40 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
12:53:40 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
12:53:40 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
12:53:40 -- common/autobuild_common.sh@502 -- $ get_config_params
12:53:40 -- common/autotest_common.sh@409 -- $ xtrace_disable
12:53:40 -- common/autotest_common.sh@10 -- $ set +x
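Note how each paths/export.sh line prepends the Go, golangci and protoc directories unconditionally, so the same directories pile up three times in the final PATH visible above. A guarded prepend avoids that; a minimal sketch (the helper name is illustrative, not part of the SPDK scripts):

# prepend a directory to PATH only if it is not already present
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;            # already on PATH, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/protoc/21.7/bin
export PATH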
00:01:37.626 12:53:40 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
12:53:40 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
12:53:40 -- pm/common@17 -- $ local monitor
12:53:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:53:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:53:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:53:40 -- pm/common@21 -- $ date +%s
12:53:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:53:40 -- pm/common@21 -- $ date +%s
12:53:40 -- pm/common@25 -- $ sleep 1
12:53:40 -- pm/common@21 -- $ date +%s
12:53:41 -- pm/common@21 -- $ date +%s
12:53:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732017221
12:53:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732017221
12:53:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732017221
12:53:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732017221
00:01:37.885 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732017221_collect-cpu-load.pm.log
00:01:37.885 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732017221_collect-vmstat.pm.log
00:01:37.885 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732017221_collect-cpu-temp.pm.log
00:01:37.885 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732017221_collect-bmc-pm.bmc.pm.log
00:01:38.823 12:53:42 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
12:53:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
12:53:42 -- spdk/autobuild.sh@12 -- $ umask 022
12:53:42 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
12:53:42 -- spdk/autobuild.sh@16 -- $ date -u
00:01:38.823 Tue Nov 19 11:53:42 AM UTC 2024
12:53:42 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:38.823 v25.01-pre-197-gdcc2ca8f3
12:53:42 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
12:53:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
12:53:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
12:53:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
12:53:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable
12:53:42 -- common/autotest_common.sh@10 -- $ set +x
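start_monitor_resources launches four background samplers (CPU load, vmstat, CPU temperature, BMC power), all tagged with the same monitor.autobuild.sh.<epoch> prefix so their logs sort together, and the `trap stop_monitor_resources EXIT` that follows guarantees they are reaped when autobuild exits. A simplified sketch of that start/stop pattern, not SPDK's actual pm/common implementation:

# start background samplers and reap them when the build exits
ts=$(date +%s)
pids=()
for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
    "./scripts/perf/pm/$mon" -d ./output/power -l -p "monitor.autobuild.sh.$ts" &
    pids+=($!)
done
trap 'kill "${pids[@]}" 2>/dev/null' EXIT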
************************************
00:01:38.823 START TEST ubsan
00:01:38.823 ************************************
12:53:42 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:38.823 using ubsan
00:01:38.823
00:01:38.823 real 0m0.000s
00:01:38.823 user 0m0.000s
00:01:38.823 sys 0m0.000s
12:53:42 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
12:53:42 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:38.823 ************************************
00:01:38.823 END TEST ubsan
00:01:38.823 ************************************
12:53:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
12:53:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
12:53:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
12:53:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
12:53:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
12:53:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
12:53:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
12:53:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
12:53:42 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:39.082 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:39.082 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:39.340 Using 'verbs' RDMA provider
00:01:52.489 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:04.697 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:04.697 Creating mk/config.mk...done.
00:02:04.697 Creating mk/cc.flags.mk...done.
00:02:04.697 Type 'make' to build.
12:54:07 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
12:54:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
12:54:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable
12:54:07 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.697 ************************************
00:02:04.697 START TEST make
00:02:04.697 ************************************
12:54:07 make -- common/autotest_common.sh@1129 -- $ make -j96
00:02:04.955 make[1]: Nothing to be done for 'all'.
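The run_test helper seen in the trace wraps an arbitrary command in the START/END banners and the per-test timing block (the real/user/sys lines above). A simplified sketch of such a wrapper, not the exact code in common/autotest_common.sh:

# banner-and-time a named test command, roughly what run_test does above
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
run_test ubsan echo 'using ubsan'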
00:02:06.334 The Meson build system
00:02:06.334 Version: 1.5.0
00:02:06.334 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:06.334 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:06.334 Build type: native build
00:02:06.334 Project name: libvfio-user
00:02:06.334 Project version: 0.0.1
00:02:06.334 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:06.334 C linker for the host machine: cc ld.bfd 2.40-14
00:02:06.334 Host machine cpu family: x86_64
00:02:06.334 Host machine cpu: x86_64
00:02:06.334 Run-time dependency threads found: YES
00:02:06.334 Library dl found: YES
00:02:06.334 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:06.334 Run-time dependency json-c found: YES 0.17
00:02:06.334 Run-time dependency cmocka found: YES 1.1.7
00:02:06.334 Program pytest-3 found: NO
00:02:06.334 Program flake8 found: NO
00:02:06.334 Program misspell-fixer found: NO
00:02:06.334 Program restructuredtext-lint found: NO
00:02:06.334 Program valgrind found: YES (/usr/bin/valgrind)
00:02:06.334 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:06.334 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:06.334 Compiler for C supports arguments -Wwrite-strings: YES
00:02:06.334 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:06.334 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:06.334 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:06.334 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:06.334 Build targets in project: 8
00:02:06.334 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:06.334 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:06.334
00:02:06.334 libvfio-user 0.0.1
00:02:06.334
00:02:06.334 User defined options
00:02:06.334 buildtype : debug
00:02:06.334 default_library: shared
00:02:06.334 libdir : /usr/local/lib
00:02:06.334
00:02:06.334 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:06.592 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:06.850 [1/37] Compiling C object samples/null.p/null.c.o
00:02:06.850 [2/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:06.850 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:06.850 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:06.850 [5/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:06.850 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:06.850 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:06.850 [8/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:06.850 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:06.850 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:06.850 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:06.850 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:06.850 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:06.850 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:06.850 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:06.850 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:06.850 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:06.850 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:06.850 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:06.850 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:06.850 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:06.850 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:06.850 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:06.850 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:06.850 [25/37] Compiling C object samples/server.p/server.c.o
00:02:06.850 [26/37] Compiling C object samples/client.p/client.c.o
00:02:07.109 [27/37] Linking target samples/client
00:02:07.109 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:07.109 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:07.109 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:07.109 [31/37] Linking target test/unit_tests
00:02:07.109 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:07.109 [33/37] Linking target samples/shadow_ioeventfd_server
00:02:07.109 [34/37] Linking target samples/lspci
00:02:07.109 [35/37] Linking target samples/server
00:02:07.109 [36/37] Linking target samples/null
00:02:07.109 [37/37] Linking target samples/gpio-pci-idio-16
00:02:07.109 INFO: autodetecting backend as ninja
00:02:07.109 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
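The "User defined options" block above (buildtype debug, default_library shared, libdir /usr/local/lib) is meson's echo of its configuration; the exact command line is not shown in the log, but it corresponds to an invocation along these lines, followed by the 37-step ninja build just listed:

# approximate configure-and-build step behind the libvfio-user output above
meson setup build-debug /path/to/libvfio-user \
    --buildtype=debug --default-library=shared --libdir=/usr/local/lib
ninja -C build-debug

The DESTDIR install that follows stages the files under the SPDK build tree instead of the real prefix, which is why ninja reports "no work to do": everything was already built.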
00:02:07.368 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:07.627 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:07.627 ninja: no work to do.
00:02:12.903 The Meson build system
00:02:12.903 Version: 1.5.0
00:02:12.903 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:12.903 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:12.903 Build type: native build
00:02:12.903 Program cat found: YES (/usr/bin/cat)
00:02:12.903 Project name: DPDK
00:02:12.903 Project version: 24.03.0
00:02:12.903 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:12.903 C linker for the host machine: cc ld.bfd 2.40-14
00:02:12.903 Host machine cpu family: x86_64
00:02:12.903 Host machine cpu: x86_64
00:02:12.903 Message: ## Building in Developer Mode ##
00:02:12.903 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:12.903 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:12.903 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:12.903 Program python3 found: YES (/usr/bin/python3)
00:02:12.903 Program cat found: YES (/usr/bin/cat)
00:02:12.903 Compiler for C supports arguments -march=native: YES
00:02:12.903 Checking for size of "void *" : 8
00:02:12.903 Checking for size of "void *" : 8 (cached)
00:02:12.903 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:12.903 Library m found: YES
00:02:12.903 Library numa found: YES
00:02:12.903 Has header "numaif.h" : YES
00:02:12.903 Library fdt found: NO
00:02:12.903 Library execinfo found: NO
00:02:12.903 Has header "execinfo.h" : YES
00:02:12.903 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:12.903 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:12.903 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:12.903 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:12.903 Run-time dependency openssl found: YES 3.1.1
00:02:12.903 Run-time dependency libpcap found: YES 1.10.4
00:02:12.903 Has header "pcap.h" with dependency libpcap: YES
00:02:12.903 Compiler for C supports arguments -Wcast-qual: YES
00:02:12.903 Compiler for C supports arguments -Wdeprecated: YES
00:02:12.903 Compiler for C supports arguments -Wformat: YES
00:02:12.903 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:12.903 Compiler for C supports arguments -Wformat-security: NO
00:02:12.903 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:12.903 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:12.903 Compiler for C supports arguments -Wnested-externs: YES
00:02:12.903 Compiler for C supports arguments -Wold-style-definition: YES
00:02:12.903 Compiler for C supports arguments -Wpointer-arith: YES
00:02:12.903 Compiler for C supports arguments -Wsign-compare: YES
00:02:12.903 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:12.903 Compiler for C supports arguments -Wundef: YES
00:02:12.903 Compiler for C supports arguments -Wwrite-strings: YES
00:02:12.903 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:12.903 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:12.903 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:12.903 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:12.903 Program objdump found: YES (/usr/bin/objdump)
00:02:12.903 Compiler for C supports arguments -mavx512f: YES
00:02:12.903 Checking if "AVX512 checking" compiles: YES
00:02:12.903 Fetching value of define "__SSE4_2__" : 1
00:02:12.904 Fetching value of define "__AES__" : 1
00:02:12.904 Fetching value of define "__AVX__" : 1
00:02:12.904 Fetching value of define "__AVX2__" : 1
00:02:12.904 Fetching value of define "__AVX512BW__" : 1
00:02:12.904 Fetching value of define "__AVX512CD__" : 1
00:02:12.904 Fetching value of define "__AVX512DQ__" : 1
00:02:12.904 Fetching value of define "__AVX512F__" : 1
00:02:12.904 Fetching value of define "__AVX512VL__" : 1
00:02:12.904 Fetching value of define "__PCLMUL__" : 1
00:02:12.904 Fetching value of define "__RDRND__" : 1
00:02:12.904 Fetching value of define "__RDSEED__" : 1
00:02:12.904 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:12.904 Fetching value of define "__znver1__" : (undefined)
00:02:12.904 Fetching value of define "__znver2__" : (undefined)
00:02:12.904 Fetching value of define "__znver3__" : (undefined)
00:02:12.904 Fetching value of define "__znver4__" : (undefined)
00:02:12.904 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:12.904 Message: lib/log: Defining dependency "log"
00:02:12.904 Message: lib/kvargs: Defining dependency "kvargs"
00:02:12.904 Message: lib/telemetry: Defining dependency "telemetry"
00:02:12.904 Checking for function "getentropy" : NO
00:02:12.904 Message: lib/eal: Defining dependency "eal"
00:02:12.904 Message: lib/ring: Defining dependency "ring"
00:02:12.904 Message: lib/rcu: Defining dependency "rcu"
00:02:12.904 Message: lib/mempool: Defining dependency "mempool"
00:02:12.904 Message: lib/mbuf: Defining dependency "mbuf"
00:02:12.904 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:12.904 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:12.904 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:12.904 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:12.904 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:12.904 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:12.904 Compiler for C supports arguments -mpclmul: YES
00:02:12.904 Compiler for C supports arguments -maes: YES
00:02:12.904 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:12.904 Compiler for C supports arguments -mavx512bw: YES
00:02:12.904 Compiler for C supports arguments -mavx512dq: YES
00:02:12.904 Compiler for C supports arguments -mavx512vl: YES
00:02:12.904 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:12.904 Compiler for C supports arguments -mavx2: YES
00:02:12.904 Compiler for C supports arguments -mavx: YES
00:02:12.904 Message: lib/net: Defining dependency "net"
00:02:12.904 Message: lib/meter: Defining dependency "meter"
00:02:12.904 Message: lib/ethdev: Defining dependency "ethdev"
00:02:12.904 Message: lib/pci: Defining dependency "pci"
00:02:12.904 Message: lib/cmdline: Defining dependency "cmdline"
00:02:12.904 Message: lib/hash: Defining dependency "hash"
00:02:12.904 Message: lib/timer: Defining dependency "timer"
00:02:12.904 Message: lib/compressdev: Defining dependency "compressdev"
00:02:12.904 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:12.904 Message: lib/dmadev: Defining dependency "dmadev"
00:02:12.904 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:12.904 Message: lib/power: Defining dependency "power"
00:02:12.904 Message: lib/reorder: Defining dependency "reorder"
00:02:12.904 Message: lib/security: Defining dependency "security"
00:02:12.904 Has header "linux/userfaultfd.h" : YES
00:02:12.904 Has header "linux/vduse.h" : YES
00:02:12.904 Message: lib/vhost: Defining dependency "vhost"
00:02:12.904 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:12.904 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:12.904 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:12.904 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:12.904 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:12.904 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:12.904 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:12.904 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:12.904 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:12.904 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:12.904 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:12.904 Configuring doxy-api-html.conf using configuration
00:02:12.904 Configuring doxy-api-man.conf using configuration
00:02:12.904 Program mandb found: YES (/usr/bin/mandb)
00:02:12.904 Program sphinx-build found: NO
00:02:12.904 Configuring rte_build_config.h using configuration
00:02:12.904 Message:
00:02:12.904 =================
00:02:12.904 Applications Enabled
00:02:12.904 =================
00:02:12.904
00:02:12.904 apps:
00:02:12.904
00:02:12.904
00:02:12.904 Message:
00:02:12.904 =================
00:02:12.904 Libraries Enabled
00:02:12.904 =================
00:02:12.904
00:02:12.904 libs:
00:02:12.904 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:12.904 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:12.904 cryptodev, dmadev, power, reorder, security, vhost,
00:02:12.904
00:02:12.904 Message:
00:02:12.904 ===============
00:02:12.904 Drivers Enabled
00:02:12.904 ===============
00:02:12.904
00:02:12.904 common:
00:02:12.904
00:02:12.904 bus:
00:02:12.904 pci, vdev,
00:02:12.904 mempool:
00:02:12.904 ring,
00:02:12.904 dma:
00:02:12.904
00:02:12.904 net:
00:02:12.904
00:02:12.904 crypto:
00:02:12.904
00:02:12.904 compress:
00:02:12.904
00:02:12.904 vdpa:
00:02:12.904
00:02:12.904
00:02:12.904 Message:
00:02:12.904 =================
00:02:12.904 Content Skipped
00:02:12.904 =================
00:02:12.904
00:02:12.904 apps:
00:02:12.904 dumpcap: explicitly disabled via build config
00:02:12.904 graph: explicitly disabled via build config
00:02:12.904 pdump: explicitly disabled via build config
00:02:12.904 proc-info: explicitly disabled via build config
00:02:12.904 test-acl: explicitly disabled via build config
00:02:12.904 test-bbdev: explicitly disabled via build config
00:02:12.904 test-cmdline: explicitly disabled via build config
00:02:12.904 test-compress-perf: explicitly disabled via build config
00:02:12.904 test-crypto-perf: explicitly disabled via build config
00:02:12.904 test-dma-perf: explicitly disabled via build config
00:02:12.904 test-eventdev: explicitly disabled via build config
00:02:12.904 test-fib: explicitly disabled via build config
00:02:12.904 test-flow-perf: explicitly disabled via build config
00:02:12.904 test-gpudev: explicitly disabled via build config
00:02:12.904 test-mldev: explicitly disabled via build config
00:02:12.904 test-pipeline: explicitly disabled via build config
00:02:12.904 test-pmd: explicitly disabled via build config
00:02:12.904 test-regex: explicitly disabled via build config
00:02:12.904 test-sad: explicitly disabled via build config
00:02:12.904 test-security-perf: explicitly disabled via build config
00:02:12.904
00:02:12.904 libs:
00:02:12.904 argparse: explicitly disabled via build config
00:02:12.904 metrics: explicitly disabled via build config
00:02:12.904 acl: explicitly disabled via build config
00:02:12.904 bbdev: explicitly disabled via build config
00:02:12.904 bitratestats: explicitly disabled via build config
00:02:12.904 bpf: explicitly disabled via build config
00:02:12.904 cfgfile: explicitly disabled via build config
00:02:12.904 distributor: explicitly disabled via build config
00:02:12.905 efd: explicitly disabled via build config
00:02:12.905 eventdev: explicitly disabled via build config
00:02:12.905 dispatcher: explicitly disabled via build config
00:02:12.905 gpudev: explicitly disabled via build config
00:02:12.905 gro: explicitly disabled via build config
00:02:12.905 gso: explicitly disabled via build config
00:02:12.905 ip_frag: explicitly disabled via build config
00:02:12.905 jobstats: explicitly disabled via build config
00:02:12.905 latencystats: explicitly disabled via build config
00:02:12.905 lpm: explicitly disabled via build config
00:02:12.905 member: explicitly disabled via build config
00:02:12.905 pcapng: explicitly disabled via build config
00:02:12.905 rawdev: explicitly disabled via build config
00:02:12.905 regexdev: explicitly disabled via build config
00:02:12.905 mldev: explicitly disabled via build config
00:02:12.905 rib: explicitly disabled via build config
00:02:12.905 sched: explicitly disabled via build config
00:02:12.905 stack: explicitly disabled via build config
00:02:12.905 ipsec: explicitly disabled via build config
00:02:12.905 pdcp: explicitly disabled via build config
00:02:12.905 fib: explicitly disabled via build config
00:02:12.905 port: explicitly disabled via build config
00:02:12.905 pdump: explicitly disabled via build config
00:02:12.905 table: explicitly disabled via build config
00:02:12.905 pipeline: explicitly disabled via build config
00:02:12.905 graph: explicitly disabled via build config
00:02:12.905 node: explicitly disabled via build config
00:02:12.905
00:02:12.905 drivers:
00:02:12.905 common/cpt: not in enabled drivers build config
00:02:12.905 common/dpaax: not in enabled drivers build config
00:02:12.905 common/iavf: not in enabled drivers build config
00:02:12.905 common/idpf: not in enabled drivers build config
00:02:12.905 common/ionic: not in enabled drivers build config
00:02:12.905 common/mvep: not in enabled drivers build config
00:02:12.905 common/octeontx: not in enabled drivers build config
00:02:12.905 bus/auxiliary: not in enabled drivers build config
00:02:12.905 bus/cdx: not in enabled drivers build config
00:02:12.905 bus/dpaa: not in enabled drivers build config
00:02:12.905 bus/fslmc: not in enabled drivers build config
00:02:12.905 bus/ifpga: not in enabled drivers build config
00:02:12.905 bus/platform: not in enabled drivers build config
00:02:12.905 bus/uacce: not in enabled drivers build config
00:02:12.905 bus/vmbus: not in enabled drivers build config
00:02:12.905 common/cnxk: not in enabled drivers build config
00:02:12.905 common/mlx5: not in enabled drivers build config
00:02:12.905 common/nfp: not in enabled drivers build config
00:02:12.905 common/nitrox: not in enabled drivers build config
00:02:12.905 common/qat: not in enabled drivers build config
00:02:12.905 common/sfc_efx: not in enabled drivers build config
00:02:12.905 mempool/bucket: not in enabled drivers build config
00:02:12.905 mempool/cnxk: not in enabled drivers build config
00:02:12.905 mempool/dpaa: not in enabled drivers build config
00:02:12.905 mempool/dpaa2: not in enabled drivers build config
00:02:12.905 mempool/octeontx: not in enabled drivers build config
00:02:12.905 mempool/stack: not in enabled drivers build config
00:02:12.905 dma/cnxk: not in enabled drivers build config
00:02:12.905 dma/dpaa: not in enabled drivers build config
00:02:12.905 dma/dpaa2: not in enabled drivers build config
00:02:12.905 dma/hisilicon: not in enabled drivers build config
00:02:12.905 dma/idxd: not in enabled drivers build config
00:02:12.905 dma/ioat: not in enabled drivers build config
00:02:12.905 dma/skeleton: not in enabled drivers build config
00:02:12.905 net/af_packet: not in enabled drivers build config
00:02:12.905 net/af_xdp: not in enabled drivers build config
00:02:12.905 net/ark: not in enabled drivers build config
00:02:12.905 net/atlantic: not in enabled drivers build config
00:02:12.905 net/avp: not in enabled drivers build config
00:02:12.905 net/axgbe: not in enabled drivers build config
00:02:12.905 net/bnx2x: not in enabled drivers build config
00:02:12.905 net/bnxt: not in enabled drivers build config
00:02:12.905 net/bonding: not in enabled drivers build config
00:02:12.905 net/cnxk: not in enabled drivers build config
00:02:12.905 net/cpfl: not in enabled drivers build config
00:02:12.905 net/cxgbe: not in enabled drivers build config
00:02:12.905 net/dpaa: not in enabled drivers build config
00:02:12.905 net/dpaa2: not in enabled drivers build config
00:02:12.905 net/e1000: not in enabled drivers build config
00:02:12.905 net/ena: not in enabled drivers build config
00:02:12.905 net/enetc: not in enabled drivers build config
00:02:12.905 net/enetfec: not in enabled drivers build config
00:02:12.905 net/enic: not in enabled drivers build config
00:02:12.905 net/failsafe: not in enabled drivers build config
00:02:12.905 net/fm10k: not in enabled drivers build config
00:02:12.905 net/gve: not in enabled drivers build config
00:02:12.905 net/hinic: not in enabled drivers build config
00:02:12.905 net/hns3: not in enabled drivers build config
00:02:12.905 net/i40e: not in enabled drivers build config
00:02:12.905 net/iavf: not in enabled drivers build config
00:02:12.905 net/ice: not in enabled drivers build config
00:02:12.905 net/idpf: not in enabled drivers build config
00:02:12.905 net/igc: not in enabled drivers build config
00:02:12.905 net/ionic: not in enabled drivers build config
00:02:12.905 net/ipn3ke: not in enabled drivers build config
00:02:12.905 net/ixgbe: not in enabled drivers build config
00:02:12.905 net/mana: not in enabled drivers build config
00:02:12.905 net/memif: not in enabled drivers build config
00:02:12.905 net/mlx4: not in enabled drivers build config
00:02:12.905 net/mlx5: not in enabled drivers build config
00:02:12.905 net/mvneta: not in enabled drivers build config
00:02:12.905 net/mvpp2: not in enabled drivers build config
00:02:12.905 net/netvsc: not in enabled drivers build config
00:02:12.905 net/nfb: not in enabled drivers build config
00:02:12.905 net/nfp: not in enabled drivers build config
00:02:12.905 net/ngbe: not in enabled drivers build config
00:02:12.905 net/null: not in enabled drivers build config
00:02:12.905 net/octeontx: not in enabled drivers build config
00:02:12.905 net/octeon_ep: not in enabled drivers build config
00:02:12.905 net/pcap: not in enabled drivers build config
00:02:12.905 net/pfe: not in enabled drivers build config
00:02:12.905 net/qede: not in enabled drivers build config
00:02:12.905 net/ring: not in enabled drivers build config
00:02:12.905 net/sfc: not in enabled drivers build config
00:02:12.905 net/softnic: not in enabled drivers build config
00:02:12.905 net/tap: not in enabled drivers build config
00:02:12.905 net/thunderx: not in enabled drivers build config
00:02:12.905 net/txgbe: not in enabled drivers build config
00:02:12.905 net/vdev_netvsc: not in enabled drivers build config
00:02:12.905 net/vhost: not in enabled drivers build config
00:02:12.905 net/virtio: not in enabled drivers build config
00:02:12.905 net/vmxnet3: not in enabled drivers build config
00:02:12.905 raw/*: missing internal dependency, "rawdev"
00:02:12.905 crypto/armv8: not in enabled drivers build config
00:02:12.905 crypto/bcmfs: not in enabled drivers build config
00:02:12.905 crypto/caam_jr: not in enabled drivers build config
00:02:12.905 crypto/ccp: not in enabled drivers build config
00:02:12.905 crypto/cnxk: not in enabled drivers build config
00:02:12.905 crypto/dpaa_sec: not in enabled drivers build config
00:02:12.905 crypto/dpaa2_sec: not in enabled drivers build config
00:02:12.905 crypto/ipsec_mb: not in enabled drivers build config
00:02:12.905 crypto/mlx5: not in enabled drivers build config
00:02:12.905 crypto/mvsam: not in enabled drivers build config
00:02:12.905 crypto/nitrox: not in enabled drivers build config
00:02:12.905 crypto/null: not in enabled drivers build config
00:02:12.905 crypto/octeontx: not in enabled drivers build config
00:02:12.905 crypto/openssl: not in enabled drivers build config
00:02:12.905 crypto/scheduler: not in enabled drivers build config
00:02:12.905 crypto/uadk: not in enabled drivers build config
00:02:12.905 crypto/virtio: not in enabled drivers build config
00:02:12.905 compress/isal: not in enabled drivers build config
00:02:12.905 compress/mlx5: not in enabled drivers build config
00:02:12.905 compress/nitrox: not in enabled drivers build config
00:02:12.905 compress/octeontx: not in enabled drivers build config
00:02:12.906 compress/zlib: not in enabled drivers build config
00:02:12.906 regex/*: missing internal dependency, "regexdev"
00:02:12.906 ml/*: missing internal dependency, "mldev"
00:02:12.906 vdpa/ifc: not in enabled drivers build config
00:02:12.906 vdpa/mlx5: not in enabled drivers build config
00:02:12.906 vdpa/nfp: not in enabled drivers build config
00:02:12.906 vdpa/sfc: not in enabled drivers build config
00:02:12.906 event/*: missing internal dependency, "eventdev"
00:02:12.906 baseband/*: missing internal dependency, "bbdev"
00:02:12.906 gpu/*: missing internal dependency, "gpudev"
00:02:12.906
00:02:12.906
00:02:12.906 Build targets in project: 85
00:02:12.906
00:02:12.906 DPDK 24.03.0
00:02:12.906
00:02:12.906 User defined options
00:02:12.906 buildtype : debug
00:02:12.906 default_library : shared
00:02:12.906 libdir : lib
00:02:12.906 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:12.906 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:12.906 c_link_args :
00:02:12.906 cpu_instruction_set: native
00:02:12.906 disable_apps : test-cmdline,dumpcap,test-dma-perf,test-bbdev,test,test-flow-perf,test-security-perf,test-compress-perf,test-fib,test-regex,test-acl,test-crypto-perf,test-mldev,proc-info,graph,test-sad,test-pipeline,test-pmd,pdump,test-eventdev,test-gpudev
00:02:12.906 disable_libs : rawdev,pipeline,argparse,node,gpudev,jobstats,port,pcapng,ip_frag,pdcp,table,lpm,efd,gso,stack,eventdev,bpf,dispatcher,mldev,fib,ipsec,acl,graph,metrics,regexdev,distributor,latencystats,bbdev,cfgfile,member,sched,gro,rib,bitratestats,pdump
00:02:12.906 enable_docs : false
00:02:12.906 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:12.906 enable_kmods : false
00:02:12.906 max_lcores : 128
00:02:12.906 tests : false
00:02:12.906
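As with libvfio-user, this summary is meson's echo of the DPDK configuration; the disable_apps/disable_libs values arrive as -D project options. A truncated sketch of the kind of command SPDK's build scripts pass (option lists abridged here, the complete values are in the summary above):

# approximate DPDK configure step behind the summary above (lists abridged)
meson setup build-tmp \
    --buildtype=debug --default-library=shared --libdir=lib \
    --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
    -Dc_args='-Wno-stringop-overflow -fcommon -fPIC -Werror' \
    -Ddisable_apps='test-cmdline,dumpcap' \
    -Ddisable_libs='rawdev,pipeline' \
    -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
    -Dmax_lcores=128 -Dtests=false
ninja -C build-tmp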
00:02:13.481 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:13.481 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:13.481 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:13.481 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:13.481 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:13.481 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:13.481 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:13.481 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:13.481 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:13.481 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:13.481 [9/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:13.481 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:13.481 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:13.481 [12/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:13.740 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:13.740 [14/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:13.740 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:13.740 [16/268] Linking static target lib/librte_kvargs.a
00:02:13.740 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:13.740 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:13.740 [19/268] Linking static target lib/librte_log.a
00:02:13.740 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:13.740 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:13.740 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:13.740 [23/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:13.740 [24/268] Linking static target lib/librte_pci.a
00:02:13.740 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:13.740 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:13.740 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:13.740 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:13.740 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:13.740 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:14.002 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:14.002 [32/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:14.002 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:14.002 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:14.002 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:14.002 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:14.002 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:14.002 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:14.002 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:14.002 [40/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:14.002 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:14.002 [42/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:14.002 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:14.002 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:14.002 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:14.002 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:14.002 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:14.002 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:14.002 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:14.002 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:14.002 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:14.002 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:14.002 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:14.002 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:14.002 [55/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:14.002 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:14.002 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:14.002 [58/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:14.002 [59/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:14.002 [60/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:14.002 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:14.002 [62/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:14.002 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:14.002 [64/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:14.002 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:14.002 [66/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:14.002 [67/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:14.002 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:14.002 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:14.002 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:14.002 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:14.002 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:14.002 [73/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:14.002 [74/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.002 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:14.002 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:14.002 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:14.002 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:14.002 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:14.002 [80/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:14.002 [81/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:14.002 [82/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:14.260 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:14.260 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:14.261 [85/268] Linking static target lib/librte_meter.a 00:02:14.261 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:14.261 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:14.261 [88/268] Linking static target lib/librte_ring.a 00:02:14.261 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:14.261 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:14.261 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:14.261 [92/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:14.261 [93/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:14.261 [94/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:14.261 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:14.261 [96/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:14.261 [97/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:14.261 [98/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:14.261 [99/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:14.261 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:14.261 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:14.261 [102/268] Linking static target lib/librte_telemetry.a 00:02:14.261 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:14.261 [104/268] Linking static target lib/librte_rcu.a 00:02:14.261 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:14.261 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:14.261 [107/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:14.261 [108/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:14.261 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:14.261 [110/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:14.261 [111/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:14.261 [112/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:14.261 [113/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:14.261 [114/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:14.261 [115/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.261 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:14.261 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:14.261 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:14.261 [119/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:14.261 [120/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:14.261 [121/268] Linking static target lib/librte_net.a 00:02:14.261 [122/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:14.261 [123/268] Linking static target lib/librte_eal.a 00:02:14.261 [124/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:14.261 [125/268] Linking static target lib/librte_mempool.a 00:02:14.261 [126/268] Linking static target lib/librte_cmdline.a 00:02:14.261 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:14.261 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:14.261 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:14.261 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:14.261 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:14.261 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:14.261 [133/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:14.519 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:14.519 [135/268] Linking static target lib/librte_mbuf.a 00:02:14.519 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:14.519 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:14.519 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.519 [139/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.519 [140/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.519 [141/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:14.519 [142/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:14.519 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:14.519 [144/268] Linking target lib/librte_log.so.24.1 00:02:14.519 [145/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:14.519 [146/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:14.519 [147/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:14.519 [148/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:14.519 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:14.519 [150/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.519 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:14.519 [152/268] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:14.519 [153/268] Linking static target lib/librte_timer.a 00:02:14.519 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:14.519 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:14.519 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.519 [157/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:14.519 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:14.519 [159/268] Linking static target lib/librte_dmadev.a 00:02:14.519 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:14.519 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:14.520 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:14.520 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:14.520 [164/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:14.520 [165/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:14.520 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:14.520 [167/268] Linking static target lib/librte_reorder.a 00:02:14.520 [168/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:14.520 [169/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:14.520 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:14.520 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:14.520 [172/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:14.520 [173/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.520 [174/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:14.520 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:14.520 [176/268] Linking static target lib/librte_compressdev.a 00:02:14.520 [177/268] Linking target lib/librte_kvargs.so.24.1 00:02:14.520 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:14.520 [179/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:14.520 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:14.520 [181/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:14.520 [182/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:14.778 [183/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:14.778 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:14.779 [185/268] Linking static target lib/librte_security.a 00:02:14.779 [186/268] Linking static target lib/librte_power.a 00:02:14.779 [187/268] Linking target lib/librte_telemetry.so.24.1 00:02:14.779 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:14.779 [189/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:14.779 [190/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:14.779 [191/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:14.779 [192/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:14.779 [193/268] 
Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:14.779 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:14.779 [195/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:14.779 [196/268] Linking static target lib/librte_hash.a 00:02:14.779 [197/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:14.779 [198/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:14.779 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:14.779 [200/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:14.779 [201/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:14.779 [202/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:14.779 [203/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:14.779 [204/268] Linking static target drivers/librte_mempool_ring.a 00:02:14.779 [205/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:14.779 [206/268] Linking static target drivers/librte_bus_vdev.a 00:02:14.779 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:14.779 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.037 [209/268] Linking static target drivers/librte_bus_pci.a 00:02:15.037 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:15.037 [211/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.037 [212/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:15.037 [213/268] Linking static target lib/librte_cryptodev.a 00:02:15.037 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.038 [215/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.038 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.295 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.295 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.295 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.295 [220/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.295 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:15.295 [222/268] Linking static target lib/librte_ethdev.a 00:02:15.295 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.554 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:15.554 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.554 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.812 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.379 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:16.379 [229/268] Linking static target 
lib/librte_vhost.a 00:02:16.946 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.324 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.600 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.168 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.427 [234/268] Linking target lib/librte_eal.so.24.1 00:02:24.427 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:24.427 [236/268] Linking target lib/librte_ring.so.24.1 00:02:24.427 [237/268] Linking target lib/librte_meter.so.24.1 00:02:24.427 [238/268] Linking target lib/librte_timer.so.24.1 00:02:24.427 [239/268] Linking target lib/librte_pci.so.24.1 00:02:24.427 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:24.427 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:24.685 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.685 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.685 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.685 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.685 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.685 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:24.685 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:24.685 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:24.945 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:24.945 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:24.945 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:24.945 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:24.945 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:24.945 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:24.945 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:24.945 [257/268] Linking target lib/librte_net.so.24.1 00:02:24.945 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:25.204 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.204 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.204 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:25.204 [262/268] Linking target lib/librte_hash.so.24.1 00:02:25.204 [263/268] Linking target lib/librte_security.so.24.1 00:02:25.204 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:25.464 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:25.464 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:25.464 [267/268] Linking target lib/librte_power.so.24.1 00:02:25.464 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:25.464 INFO: autodetecting backend as ninja 00:02:25.464 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:37.680 CC lib/log/log.o 00:02:37.680 CC lib/ut_mock/mock.o 00:02:37.680 CC lib/log/log_flags.o 00:02:37.680 CC lib/log/log_deprecated.o 
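From "CC lib/log/log.o" onward the log leaves the DPDK/meson build (whose link phase produced the "Linking target" and "Generating symbol file" lines above) and enters SPDK's own quiet make output: CC lines compile objects, LIB lines archive them into static libraries, and each SO/SYMLINK pair produces a versioned shared object plus an unversioned development symlink. In spirit, one such triple amounts to the sketch below; the flags and object list are illustrative, not SPDK's real link line:

    ar rcs libspdk_log.a log.o log_flags.o log_deprecated.o        # LIB
    cc -shared -Wl,-soname,libspdk_log.so.7.1 \
        -o libspdk_log.so.7.1 log.o log_flags.o log_deprecated.o  # SO
    ln -sf libspdk_log.so.7.1 libspdk_log.so                      # SYMLINK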
00:02:37.680 CC lib/ut/ut.o 00:02:37.680 LIB libspdk_ut_mock.a 00:02:37.680 LIB libspdk_ut.a 00:02:37.680 LIB libspdk_log.a 00:02:37.680 SO libspdk_ut_mock.so.6.0 00:02:37.680 SO libspdk_ut.so.2.0 00:02:37.680 SO libspdk_log.so.7.1 00:02:37.680 SYMLINK libspdk_ut_mock.so 00:02:37.680 SYMLINK libspdk_ut.so 00:02:37.680 SYMLINK libspdk_log.so 00:02:37.680 CC lib/util/bit_array.o 00:02:37.680 CC lib/util/base64.o 00:02:37.680 CC lib/util/cpuset.o 00:02:37.680 CC lib/util/crc16.o 00:02:37.680 CC lib/util/crc32.o 00:02:37.680 CC lib/util/crc32c.o 00:02:37.680 CXX lib/trace_parser/trace.o 00:02:37.680 CC lib/util/crc32_ieee.o 00:02:37.680 CC lib/util/crc64.o 00:02:37.680 CC lib/util/dif.o 00:02:37.680 CC lib/util/fd.o 00:02:37.680 CC lib/dma/dma.o 00:02:37.680 CC lib/util/file.o 00:02:37.680 CC lib/util/fd_group.o 00:02:37.680 CC lib/ioat/ioat.o 00:02:37.680 CC lib/util/hexlify.o 00:02:37.680 CC lib/util/iov.o 00:02:37.680 CC lib/util/math.o 00:02:37.680 CC lib/util/net.o 00:02:37.680 CC lib/util/pipe.o 00:02:37.680 CC lib/util/strerror_tls.o 00:02:37.680 CC lib/util/string.o 00:02:37.680 CC lib/util/uuid.o 00:02:37.680 CC lib/util/xor.o 00:02:37.680 CC lib/util/zipf.o 00:02:37.680 CC lib/util/md5.o 00:02:37.680 CC lib/vfio_user/host/vfio_user_pci.o 00:02:37.680 CC lib/vfio_user/host/vfio_user.o 00:02:37.680 LIB libspdk_dma.a 00:02:37.680 SO libspdk_dma.so.5.0 00:02:37.680 LIB libspdk_ioat.a 00:02:37.680 SO libspdk_ioat.so.7.0 00:02:37.680 SYMLINK libspdk_dma.so 00:02:37.680 LIB libspdk_vfio_user.a 00:02:37.680 SYMLINK libspdk_ioat.so 00:02:37.680 SO libspdk_vfio_user.so.5.0 00:02:37.680 LIB libspdk_util.a 00:02:37.680 SYMLINK libspdk_vfio_user.so 00:02:37.680 SO libspdk_util.so.10.1 00:02:37.680 SYMLINK libspdk_util.so 00:02:37.680 LIB libspdk_trace_parser.a 00:02:37.680 SO libspdk_trace_parser.so.6.0 00:02:37.680 SYMLINK libspdk_trace_parser.so 00:02:37.680 CC lib/json/json_parse.o 00:02:37.680 CC lib/vmd/vmd.o 00:02:37.680 CC lib/json/json_util.o 00:02:37.680 CC lib/json/json_write.o 00:02:37.680 CC lib/vmd/led.o 00:02:37.680 CC lib/rdma_utils/rdma_utils.o 00:02:37.680 CC lib/env_dpdk/env.o 00:02:37.680 CC lib/conf/conf.o 00:02:37.680 CC lib/idxd/idxd.o 00:02:37.680 CC lib/env_dpdk/memory.o 00:02:37.680 CC lib/env_dpdk/pci.o 00:02:37.680 CC lib/idxd/idxd_user.o 00:02:37.680 CC lib/env_dpdk/init.o 00:02:37.680 CC lib/idxd/idxd_kernel.o 00:02:37.680 CC lib/env_dpdk/threads.o 00:02:37.680 CC lib/env_dpdk/pci_ioat.o 00:02:37.680 CC lib/env_dpdk/pci_virtio.o 00:02:37.680 CC lib/env_dpdk/pci_vmd.o 00:02:37.680 CC lib/env_dpdk/pci_idxd.o 00:02:37.680 CC lib/env_dpdk/pci_event.o 00:02:37.680 CC lib/env_dpdk/sigbus_handler.o 00:02:37.680 CC lib/env_dpdk/pci_dpdk.o 00:02:37.680 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:37.680 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:37.939 LIB libspdk_conf.a 00:02:37.939 LIB libspdk_rdma_utils.a 00:02:37.939 SO libspdk_conf.so.6.0 00:02:37.939 LIB libspdk_json.a 00:02:37.939 SO libspdk_rdma_utils.so.1.0 00:02:37.939 SO libspdk_json.so.6.0 00:02:37.939 SYMLINK libspdk_conf.so 00:02:37.939 SYMLINK libspdk_rdma_utils.so 00:02:37.939 SYMLINK libspdk_json.so 00:02:38.198 LIB libspdk_vmd.a 00:02:38.198 LIB libspdk_idxd.a 00:02:38.198 SO libspdk_vmd.so.6.0 00:02:38.198 SO libspdk_idxd.so.12.1 00:02:38.198 SYMLINK libspdk_vmd.so 00:02:38.198 SYMLINK libspdk_idxd.so 00:02:38.198 CC lib/rdma_provider/common.o 00:02:38.198 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:38.198 CC lib/jsonrpc/jsonrpc_server.o 00:02:38.198 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:38.198 CC 
lib/jsonrpc/jsonrpc_client.o 00:02:38.198 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:38.457 LIB libspdk_rdma_provider.a 00:02:38.457 SO libspdk_rdma_provider.so.7.0 00:02:38.457 LIB libspdk_jsonrpc.a 00:02:38.457 SO libspdk_jsonrpc.so.6.0 00:02:38.457 SYMLINK libspdk_rdma_provider.so 00:02:38.717 SYMLINK libspdk_jsonrpc.so 00:02:38.717 LIB libspdk_env_dpdk.a 00:02:38.717 SO libspdk_env_dpdk.so.15.1 00:02:38.717 SYMLINK libspdk_env_dpdk.so 00:02:38.977 CC lib/rpc/rpc.o 00:02:38.977 LIB libspdk_rpc.a 00:02:39.237 SO libspdk_rpc.so.6.0 00:02:39.237 SYMLINK libspdk_rpc.so 00:02:39.497 CC lib/notify/notify.o 00:02:39.497 CC lib/notify/notify_rpc.o 00:02:39.497 CC lib/trace/trace.o 00:02:39.497 CC lib/trace/trace_flags.o 00:02:39.497 CC lib/trace/trace_rpc.o 00:02:39.497 CC lib/keyring/keyring.o 00:02:39.497 CC lib/keyring/keyring_rpc.o 00:02:39.756 LIB libspdk_notify.a 00:02:39.756 SO libspdk_notify.so.6.0 00:02:39.756 LIB libspdk_keyring.a 00:02:39.756 LIB libspdk_trace.a 00:02:39.756 SO libspdk_keyring.so.2.0 00:02:39.756 SO libspdk_trace.so.11.0 00:02:39.756 SYMLINK libspdk_notify.so 00:02:39.756 SYMLINK libspdk_keyring.so 00:02:39.756 SYMLINK libspdk_trace.so 00:02:40.324 CC lib/sock/sock.o 00:02:40.324 CC lib/sock/sock_rpc.o 00:02:40.324 CC lib/thread/thread.o 00:02:40.324 CC lib/thread/iobuf.o 00:02:40.584 LIB libspdk_sock.a 00:02:40.584 SO libspdk_sock.so.10.0 00:02:40.584 SYMLINK libspdk_sock.so 00:02:40.843 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:40.843 CC lib/nvme/nvme_ctrlr.o 00:02:40.843 CC lib/nvme/nvme_fabric.o 00:02:40.843 CC lib/nvme/nvme_ns_cmd.o 00:02:40.843 CC lib/nvme/nvme_ns.o 00:02:40.843 CC lib/nvme/nvme_pcie_common.o 00:02:40.843 CC lib/nvme/nvme_pcie.o 00:02:40.843 CC lib/nvme/nvme_qpair.o 00:02:40.843 CC lib/nvme/nvme.o 00:02:40.843 CC lib/nvme/nvme_quirks.o 00:02:40.843 CC lib/nvme/nvme_transport.o 00:02:40.843 CC lib/nvme/nvme_discovery.o 00:02:40.843 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:40.843 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:40.843 CC lib/nvme/nvme_tcp.o 00:02:40.843 CC lib/nvme/nvme_opal.o 00:02:40.843 CC lib/nvme/nvme_io_msg.o 00:02:40.843 CC lib/nvme/nvme_poll_group.o 00:02:40.843 CC lib/nvme/nvme_zns.o 00:02:40.843 CC lib/nvme/nvme_stubs.o 00:02:40.843 CC lib/nvme/nvme_auth.o 00:02:40.843 CC lib/nvme/nvme_cuse.o 00:02:40.843 CC lib/nvme/nvme_vfio_user.o 00:02:40.843 CC lib/nvme/nvme_rdma.o 00:02:41.103 LIB libspdk_thread.a 00:02:41.361 SO libspdk_thread.so.11.0 00:02:41.361 SYMLINK libspdk_thread.so 00:02:41.619 CC lib/init/json_config.o 00:02:41.619 CC lib/init/subsystem.o 00:02:41.619 CC lib/blob/blobstore.o 00:02:41.619 CC lib/init/rpc.o 00:02:41.619 CC lib/init/subsystem_rpc.o 00:02:41.619 CC lib/blob/request.o 00:02:41.619 CC lib/blob/blob_bs_dev.o 00:02:41.619 CC lib/accel/accel.o 00:02:41.619 CC lib/blob/zeroes.o 00:02:41.619 CC lib/accel/accel_rpc.o 00:02:41.619 CC lib/accel/accel_sw.o 00:02:41.619 CC lib/virtio/virtio.o 00:02:41.619 CC lib/virtio/virtio_vhost_user.o 00:02:41.619 CC lib/virtio/virtio_vfio_user.o 00:02:41.619 CC lib/vfu_tgt/tgt_rpc.o 00:02:41.619 CC lib/virtio/virtio_pci.o 00:02:41.619 CC lib/vfu_tgt/tgt_endpoint.o 00:02:41.619 CC lib/fsdev/fsdev.o 00:02:41.619 CC lib/fsdev/fsdev_io.o 00:02:41.619 CC lib/fsdev/fsdev_rpc.o 00:02:41.877 LIB libspdk_init.a 00:02:41.877 SO libspdk_init.so.6.0 00:02:41.877 LIB libspdk_vfu_tgt.a 00:02:41.877 LIB libspdk_virtio.a 00:02:41.877 SO libspdk_vfu_tgt.so.3.0 00:02:41.877 SYMLINK libspdk_init.so 00:02:41.877 SO libspdk_virtio.so.7.0 00:02:41.877 SYMLINK libspdk_vfu_tgt.so 00:02:42.137 SYMLINK 
libspdk_virtio.so 00:02:42.137 LIB libspdk_fsdev.a 00:02:42.137 SO libspdk_fsdev.so.2.0 00:02:42.137 CC lib/event/app.o 00:02:42.137 CC lib/event/reactor.o 00:02:42.137 CC lib/event/log_rpc.o 00:02:42.137 CC lib/event/app_rpc.o 00:02:42.137 CC lib/event/scheduler_static.o 00:02:42.137 SYMLINK libspdk_fsdev.so 00:02:42.396 LIB libspdk_accel.a 00:02:42.396 SO libspdk_accel.so.16.0 00:02:42.686 SYMLINK libspdk_accel.so 00:02:42.686 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:42.686 LIB libspdk_nvme.a 00:02:42.686 LIB libspdk_event.a 00:02:42.686 SO libspdk_event.so.14.0 00:02:42.686 SO libspdk_nvme.so.15.0 00:02:42.686 SYMLINK libspdk_event.so 00:02:43.023 CC lib/bdev/bdev.o 00:02:43.023 CC lib/bdev/bdev_rpc.o 00:02:43.023 CC lib/bdev/bdev_zone.o 00:02:43.023 CC lib/bdev/part.o 00:02:43.023 CC lib/bdev/scsi_nvme.o 00:02:43.023 SYMLINK libspdk_nvme.so 00:02:43.023 LIB libspdk_fuse_dispatcher.a 00:02:43.023 SO libspdk_fuse_dispatcher.so.1.0 00:02:43.387 SYMLINK libspdk_fuse_dispatcher.so 00:02:43.678 LIB libspdk_blob.a 00:02:43.937 SO libspdk_blob.so.11.0 00:02:43.937 SYMLINK libspdk_blob.so 00:02:44.196 CC lib/lvol/lvol.o 00:02:44.196 CC lib/blobfs/blobfs.o 00:02:44.196 CC lib/blobfs/tree.o 00:02:44.764 LIB libspdk_bdev.a 00:02:44.764 SO libspdk_bdev.so.17.0 00:02:44.764 LIB libspdk_blobfs.a 00:02:44.764 SO libspdk_blobfs.so.10.0 00:02:44.764 SYMLINK libspdk_bdev.so 00:02:44.764 LIB libspdk_lvol.a 00:02:45.023 SYMLINK libspdk_blobfs.so 00:02:45.023 SO libspdk_lvol.so.10.0 00:02:45.023 SYMLINK libspdk_lvol.so 00:02:45.023 CC lib/nbd/nbd.o 00:02:45.023 CC lib/nbd/nbd_rpc.o 00:02:45.281 CC lib/nvmf/ctrlr.o 00:02:45.281 CC lib/scsi/dev.o 00:02:45.281 CC lib/nvmf/ctrlr_discovery.o 00:02:45.281 CC lib/scsi/lun.o 00:02:45.281 CC lib/nvmf/ctrlr_bdev.o 00:02:45.281 CC lib/ublk/ublk.o 00:02:45.281 CC lib/nvmf/subsystem.o 00:02:45.281 CC lib/scsi/port.o 00:02:45.281 CC lib/ublk/ublk_rpc.o 00:02:45.281 CC lib/scsi/scsi.o 00:02:45.281 CC lib/nvmf/nvmf.o 00:02:45.281 CC lib/ftl/ftl_core.o 00:02:45.281 CC lib/scsi/scsi_bdev.o 00:02:45.281 CC lib/nvmf/nvmf_rpc.o 00:02:45.281 CC lib/ftl/ftl_init.o 00:02:45.281 CC lib/scsi/scsi_pr.o 00:02:45.281 CC lib/nvmf/transport.o 00:02:45.281 CC lib/ftl/ftl_layout.o 00:02:45.281 CC lib/scsi/scsi_rpc.o 00:02:45.281 CC lib/nvmf/tcp.o 00:02:45.281 CC lib/scsi/task.o 00:02:45.281 CC lib/ftl/ftl_debug.o 00:02:45.281 CC lib/nvmf/stubs.o 00:02:45.281 CC lib/ftl/ftl_io.o 00:02:45.281 CC lib/ftl/ftl_sb.o 00:02:45.281 CC lib/nvmf/mdns_server.o 00:02:45.281 CC lib/nvmf/vfio_user.o 00:02:45.281 CC lib/ftl/ftl_l2p.o 00:02:45.281 CC lib/ftl/ftl_l2p_flat.o 00:02:45.281 CC lib/nvmf/rdma.o 00:02:45.281 CC lib/ftl/ftl_nv_cache.o 00:02:45.281 CC lib/nvmf/auth.o 00:02:45.281 CC lib/ftl/ftl_band.o 00:02:45.281 CC lib/ftl/ftl_band_ops.o 00:02:45.281 CC lib/ftl/ftl_writer.o 00:02:45.281 CC lib/ftl/ftl_rq.o 00:02:45.281 CC lib/ftl/ftl_l2p_cache.o 00:02:45.281 CC lib/ftl/ftl_reloc.o 00:02:45.281 CC lib/ftl/ftl_p2l.o 00:02:45.281 CC lib/ftl/ftl_p2l_log.o 00:02:45.281 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:45.281 CC lib/ftl/mngt/ftl_mngt.o 00:02:45.281 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:45.281 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:45.281 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:45.281 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:45.281 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:45.281 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:45.281 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:45.281 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:45.281 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:45.281 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:02:45.281 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:45.281 CC lib/ftl/utils/ftl_conf.o 00:02:45.281 CC lib/ftl/utils/ftl_md.o 00:02:45.281 CC lib/ftl/utils/ftl_mempool.o 00:02:45.281 CC lib/ftl/utils/ftl_bitmap.o 00:02:45.281 CC lib/ftl/utils/ftl_property.o 00:02:45.281 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:45.281 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:45.281 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:45.281 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:45.281 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:45.281 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:45.281 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:45.281 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:45.281 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:45.281 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:45.281 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:45.281 CC lib/ftl/base/ftl_base_dev.o 00:02:45.281 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:45.281 CC lib/ftl/ftl_trace.o 00:02:45.281 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:45.281 CC lib/ftl/base/ftl_base_bdev.o 00:02:45.847 LIB libspdk_nbd.a 00:02:45.847 SO libspdk_nbd.so.7.0 00:02:45.847 SYMLINK libspdk_nbd.so 00:02:45.847 LIB libspdk_scsi.a 00:02:45.847 SO libspdk_scsi.so.9.0 00:02:45.847 LIB libspdk_ublk.a 00:02:46.105 SYMLINK libspdk_scsi.so 00:02:46.105 SO libspdk_ublk.so.3.0 00:02:46.105 SYMLINK libspdk_ublk.so 00:02:46.105 LIB libspdk_ftl.a 00:02:46.364 CC lib/vhost/vhost.o 00:02:46.364 CC lib/vhost/vhost_rpc.o 00:02:46.364 CC lib/vhost/vhost_scsi.o 00:02:46.364 CC lib/iscsi/conn.o 00:02:46.364 CC lib/vhost/vhost_blk.o 00:02:46.364 CC lib/iscsi/init_grp.o 00:02:46.364 CC lib/iscsi/iscsi.o 00:02:46.364 CC lib/vhost/rte_vhost_user.o 00:02:46.364 CC lib/iscsi/param.o 00:02:46.364 CC lib/iscsi/portal_grp.o 00:02:46.364 CC lib/iscsi/tgt_node.o 00:02:46.364 CC lib/iscsi/iscsi_subsystem.o 00:02:46.364 CC lib/iscsi/iscsi_rpc.o 00:02:46.364 CC lib/iscsi/task.o 00:02:46.364 SO libspdk_ftl.so.9.0 00:02:46.623 SYMLINK libspdk_ftl.so 00:02:46.881 LIB libspdk_nvmf.a 00:02:46.881 SO libspdk_nvmf.so.20.0 00:02:47.140 LIB libspdk_vhost.a 00:02:47.140 SYMLINK libspdk_nvmf.so 00:02:47.140 SO libspdk_vhost.so.8.0 00:02:47.140 SYMLINK libspdk_vhost.so 00:02:47.400 LIB libspdk_iscsi.a 00:02:47.400 SO libspdk_iscsi.so.8.0 00:02:47.400 SYMLINK libspdk_iscsi.so 00:02:47.968 CC module/env_dpdk/env_dpdk_rpc.o 00:02:47.968 CC module/vfu_device/vfu_virtio.o 00:02:47.968 CC module/vfu_device/vfu_virtio_blk.o 00:02:47.968 CC module/vfu_device/vfu_virtio_scsi.o 00:02:47.968 CC module/vfu_device/vfu_virtio_rpc.o 00:02:47.968 CC module/vfu_device/vfu_virtio_fs.o 00:02:48.226 CC module/blob/bdev/blob_bdev.o 00:02:48.226 LIB libspdk_env_dpdk_rpc.a 00:02:48.226 CC module/scheduler/gscheduler/gscheduler.o 00:02:48.226 CC module/keyring/file/keyring_rpc.o 00:02:48.226 CC module/keyring/file/keyring.o 00:02:48.226 CC module/accel/dsa/accel_dsa.o 00:02:48.226 CC module/accel/ioat/accel_ioat.o 00:02:48.226 CC module/accel/ioat/accel_ioat_rpc.o 00:02:48.226 CC module/accel/dsa/accel_dsa_rpc.o 00:02:48.226 CC module/keyring/linux/keyring.o 00:02:48.226 CC module/keyring/linux/keyring_rpc.o 00:02:48.226 CC module/sock/posix/posix.o 00:02:48.226 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:48.227 CC module/accel/error/accel_error.o 00:02:48.227 CC module/accel/error/accel_error_rpc.o 00:02:48.227 CC module/accel/iaa/accel_iaa.o 00:02:48.227 CC module/fsdev/aio/fsdev_aio.o 00:02:48.227 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:48.227 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:48.227 CC 
module/fsdev/aio/linux_aio_mgr.o 00:02:48.227 CC module/accel/iaa/accel_iaa_rpc.o 00:02:48.227 SO libspdk_env_dpdk_rpc.so.6.0 00:02:48.227 SYMLINK libspdk_env_dpdk_rpc.so 00:02:48.227 LIB libspdk_keyring_linux.a 00:02:48.227 LIB libspdk_scheduler_gscheduler.a 00:02:48.227 LIB libspdk_keyring_file.a 00:02:48.227 LIB libspdk_scheduler_dpdk_governor.a 00:02:48.227 LIB libspdk_accel_ioat.a 00:02:48.227 SO libspdk_scheduler_gscheduler.so.4.0 00:02:48.227 SO libspdk_keyring_linux.so.1.0 00:02:48.227 SO libspdk_keyring_file.so.2.0 00:02:48.485 SO libspdk_accel_ioat.so.6.0 00:02:48.485 LIB libspdk_scheduler_dynamic.a 00:02:48.485 LIB libspdk_accel_error.a 00:02:48.485 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:48.485 LIB libspdk_blob_bdev.a 00:02:48.485 SYMLINK libspdk_scheduler_gscheduler.so 00:02:48.485 LIB libspdk_accel_iaa.a 00:02:48.485 SO libspdk_scheduler_dynamic.so.4.0 00:02:48.485 SYMLINK libspdk_keyring_file.so 00:02:48.485 SYMLINK libspdk_keyring_linux.so 00:02:48.485 SO libspdk_accel_error.so.2.0 00:02:48.485 SO libspdk_accel_iaa.so.3.0 00:02:48.485 SYMLINK libspdk_accel_ioat.so 00:02:48.485 SO libspdk_blob_bdev.so.11.0 00:02:48.485 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:48.485 LIB libspdk_accel_dsa.a 00:02:48.485 SYMLINK libspdk_scheduler_dynamic.so 00:02:48.485 SO libspdk_accel_dsa.so.5.0 00:02:48.485 SYMLINK libspdk_accel_error.so 00:02:48.485 SYMLINK libspdk_blob_bdev.so 00:02:48.485 SYMLINK libspdk_accel_iaa.so 00:02:48.485 LIB libspdk_vfu_device.a 00:02:48.485 SYMLINK libspdk_accel_dsa.so 00:02:48.485 SO libspdk_vfu_device.so.3.0 00:02:48.743 SYMLINK libspdk_vfu_device.so 00:02:48.743 LIB libspdk_fsdev_aio.a 00:02:48.743 SO libspdk_fsdev_aio.so.1.0 00:02:48.743 LIB libspdk_sock_posix.a 00:02:48.743 SO libspdk_sock_posix.so.6.0 00:02:48.743 SYMLINK libspdk_fsdev_aio.so 00:02:48.743 SYMLINK libspdk_sock_posix.so 00:02:49.001 CC module/blobfs/bdev/blobfs_bdev.o 00:02:49.001 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:49.001 CC module/bdev/error/vbdev_error.o 00:02:49.001 CC module/bdev/error/vbdev_error_rpc.o 00:02:49.001 CC module/bdev/delay/vbdev_delay.o 00:02:49.001 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:49.001 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:49.001 CC module/bdev/malloc/bdev_malloc.o 00:02:49.001 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:49.001 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:49.001 CC module/bdev/gpt/vbdev_gpt.o 00:02:49.001 CC module/bdev/gpt/gpt.o 00:02:49.001 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:49.001 CC module/bdev/passthru/vbdev_passthru.o 00:02:49.001 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:49.001 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:49.001 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:49.001 CC module/bdev/null/bdev_null.o 00:02:49.001 CC module/bdev/null/bdev_null_rpc.o 00:02:49.001 CC module/bdev/nvme/bdev_nvme.o 00:02:49.001 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:49.001 CC module/bdev/nvme/nvme_rpc.o 00:02:49.001 CC module/bdev/lvol/vbdev_lvol.o 00:02:49.001 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:49.001 CC module/bdev/raid/bdev_raid.o 00:02:49.001 CC module/bdev/nvme/bdev_mdns_client.o 00:02:49.001 CC module/bdev/nvme/vbdev_opal.o 00:02:49.001 CC module/bdev/split/vbdev_split.o 00:02:49.001 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:49.001 CC module/bdev/raid/bdev_raid_rpc.o 00:02:49.001 CC module/bdev/raid/bdev_raid_sb.o 00:02:49.001 CC module/bdev/split/vbdev_split_rpc.o 00:02:49.001 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:49.001 CC 
module/bdev/raid/raid1.o 00:02:49.001 CC module/bdev/raid/raid0.o 00:02:49.001 CC module/bdev/raid/concat.o 00:02:49.001 CC module/bdev/ftl/bdev_ftl.o 00:02:49.001 CC module/bdev/aio/bdev_aio.o 00:02:49.001 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:49.001 CC module/bdev/aio/bdev_aio_rpc.o 00:02:49.001 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:49.001 CC module/bdev/iscsi/bdev_iscsi.o 00:02:49.260 LIB libspdk_blobfs_bdev.a 00:02:49.260 SO libspdk_blobfs_bdev.so.6.0 00:02:49.260 LIB libspdk_bdev_gpt.a 00:02:49.260 LIB libspdk_bdev_split.a 00:02:49.260 LIB libspdk_bdev_null.a 00:02:49.260 LIB libspdk_bdev_error.a 00:02:49.260 LIB libspdk_bdev_passthru.a 00:02:49.260 SO libspdk_bdev_gpt.so.6.0 00:02:49.260 SYMLINK libspdk_blobfs_bdev.so 00:02:49.260 SO libspdk_bdev_split.so.6.0 00:02:49.260 LIB libspdk_bdev_ftl.a 00:02:49.260 SO libspdk_bdev_null.so.6.0 00:02:49.260 SO libspdk_bdev_passthru.so.6.0 00:02:49.260 SO libspdk_bdev_error.so.6.0 00:02:49.260 LIB libspdk_bdev_zone_block.a 00:02:49.260 LIB libspdk_bdev_aio.a 00:02:49.260 SO libspdk_bdev_ftl.so.6.0 00:02:49.260 LIB libspdk_bdev_malloc.a 00:02:49.260 SO libspdk_bdev_zone_block.so.6.0 00:02:49.260 SYMLINK libspdk_bdev_gpt.so 00:02:49.260 SYMLINK libspdk_bdev_split.so 00:02:49.260 LIB libspdk_bdev_delay.a 00:02:49.260 SYMLINK libspdk_bdev_null.so 00:02:49.260 SO libspdk_bdev_aio.so.6.0 00:02:49.260 LIB libspdk_bdev_iscsi.a 00:02:49.260 SYMLINK libspdk_bdev_passthru.so 00:02:49.260 SYMLINK libspdk_bdev_error.so 00:02:49.260 SO libspdk_bdev_malloc.so.6.0 00:02:49.260 SO libspdk_bdev_delay.so.6.0 00:02:49.260 SYMLINK libspdk_bdev_ftl.so 00:02:49.519 SO libspdk_bdev_iscsi.so.6.0 00:02:49.519 SYMLINK libspdk_bdev_zone_block.so 00:02:49.519 SYMLINK libspdk_bdev_aio.so 00:02:49.519 SYMLINK libspdk_bdev_delay.so 00:02:49.519 LIB libspdk_bdev_virtio.a 00:02:49.519 SYMLINK libspdk_bdev_malloc.so 00:02:49.519 LIB libspdk_bdev_lvol.a 00:02:49.519 SYMLINK libspdk_bdev_iscsi.so 00:02:49.519 SO libspdk_bdev_virtio.so.6.0 00:02:49.519 SO libspdk_bdev_lvol.so.6.0 00:02:49.519 SYMLINK libspdk_bdev_lvol.so 00:02:49.519 SYMLINK libspdk_bdev_virtio.so 00:02:49.778 LIB libspdk_bdev_raid.a 00:02:49.778 SO libspdk_bdev_raid.so.6.0 00:02:50.037 SYMLINK libspdk_bdev_raid.so 00:02:50.975 LIB libspdk_bdev_nvme.a 00:02:50.975 SO libspdk_bdev_nvme.so.7.1 00:02:50.975 SYMLINK libspdk_bdev_nvme.so 00:02:51.544 CC module/event/subsystems/vmd/vmd.o 00:02:51.544 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:51.544 CC module/event/subsystems/iobuf/iobuf.o 00:02:51.544 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:51.544 CC module/event/subsystems/keyring/keyring.o 00:02:51.544 CC module/event/subsystems/sock/sock.o 00:02:51.544 CC module/event/subsystems/scheduler/scheduler.o 00:02:51.544 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:51.544 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:51.544 CC module/event/subsystems/fsdev/fsdev.o 00:02:51.804 LIB libspdk_event_iobuf.a 00:02:51.804 LIB libspdk_event_vhost_blk.a 00:02:51.804 LIB libspdk_event_vfu_tgt.a 00:02:51.804 LIB libspdk_event_vmd.a 00:02:51.804 LIB libspdk_event_keyring.a 00:02:51.804 LIB libspdk_event_fsdev.a 00:02:51.804 LIB libspdk_event_sock.a 00:02:51.804 LIB libspdk_event_scheduler.a 00:02:51.804 SO libspdk_event_vhost_blk.so.3.0 00:02:51.804 SO libspdk_event_vfu_tgt.so.3.0 00:02:51.804 SO libspdk_event_keyring.so.1.0 00:02:51.804 SO libspdk_event_iobuf.so.3.0 00:02:51.804 SO libspdk_event_vmd.so.6.0 00:02:51.804 SO libspdk_event_fsdev.so.1.0 00:02:51.804 SO libspdk_event_sock.so.5.0 
00:02:51.804 SO libspdk_event_scheduler.so.4.0 00:02:51.804 SYMLINK libspdk_event_vhost_blk.so 00:02:51.804 SYMLINK libspdk_event_vfu_tgt.so 00:02:51.804 SYMLINK libspdk_event_iobuf.so 00:02:51.804 SYMLINK libspdk_event_keyring.so 00:02:51.804 SYMLINK libspdk_event_vmd.so 00:02:51.804 SYMLINK libspdk_event_fsdev.so 00:02:51.804 SYMLINK libspdk_event_sock.so 00:02:51.804 SYMLINK libspdk_event_scheduler.so 00:02:52.064 CC module/event/subsystems/accel/accel.o 00:02:52.324 LIB libspdk_event_accel.a 00:02:52.324 SO libspdk_event_accel.so.6.0 00:02:52.324 SYMLINK libspdk_event_accel.so 00:02:52.583 CC module/event/subsystems/bdev/bdev.o 00:02:52.843 LIB libspdk_event_bdev.a 00:02:52.843 SO libspdk_event_bdev.so.6.0 00:02:52.843 SYMLINK libspdk_event_bdev.so 00:02:53.458 CC module/event/subsystems/scsi/scsi.o 00:02:53.458 CC module/event/subsystems/nbd/nbd.o 00:02:53.458 CC module/event/subsystems/ublk/ublk.o 00:02:53.458 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:53.458 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:53.458 LIB libspdk_event_scsi.a 00:02:53.458 LIB libspdk_event_nbd.a 00:02:53.458 LIB libspdk_event_ublk.a 00:02:53.458 SO libspdk_event_scsi.so.6.0 00:02:53.458 SO libspdk_event_nbd.so.6.0 00:02:53.458 SO libspdk_event_ublk.so.3.0 00:02:53.458 LIB libspdk_event_nvmf.a 00:02:53.458 SYMLINK libspdk_event_scsi.so 00:02:53.458 SYMLINK libspdk_event_nbd.so 00:02:53.458 SYMLINK libspdk_event_ublk.so 00:02:53.458 SO libspdk_event_nvmf.so.6.0 00:02:53.458 SYMLINK libspdk_event_nvmf.so 00:02:53.717 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:53.717 CC module/event/subsystems/iscsi/iscsi.o 00:02:53.976 LIB libspdk_event_vhost_scsi.a 00:02:53.976 LIB libspdk_event_iscsi.a 00:02:53.976 SO libspdk_event_vhost_scsi.so.3.0 00:02:53.976 SO libspdk_event_iscsi.so.6.0 00:02:53.976 SYMLINK libspdk_event_vhost_scsi.so 00:02:53.976 SYMLINK libspdk_event_iscsi.so 00:02:54.235 SO libspdk.so.6.0 00:02:54.235 SYMLINK libspdk.so 00:02:54.493 CC test/rpc_client/rpc_client_test.o 00:02:54.493 CXX app/trace/trace.o 00:02:54.493 CC app/spdk_top/spdk_top.o 00:02:54.493 CC app/trace_record/trace_record.o 00:02:54.493 CC app/spdk_lspci/spdk_lspci.o 00:02:54.493 CC app/spdk_nvme_discover/discovery_aer.o 00:02:54.493 CC app/spdk_nvme_identify/identify.o 00:02:54.758 CC app/spdk_nvme_perf/perf.o 00:02:54.758 TEST_HEADER include/spdk/accel_module.h 00:02:54.758 TEST_HEADER include/spdk/accel.h 00:02:54.758 TEST_HEADER include/spdk/assert.h 00:02:54.758 TEST_HEADER include/spdk/barrier.h 00:02:54.758 TEST_HEADER include/spdk/base64.h 00:02:54.758 TEST_HEADER include/spdk/bdev_zone.h 00:02:54.758 TEST_HEADER include/spdk/bdev.h 00:02:54.758 TEST_HEADER include/spdk/bdev_module.h 00:02:54.758 TEST_HEADER include/spdk/bit_array.h 00:02:54.758 TEST_HEADER include/spdk/bit_pool.h 00:02:54.758 TEST_HEADER include/spdk/blob_bdev.h 00:02:54.758 TEST_HEADER include/spdk/blob.h 00:02:54.758 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:54.758 TEST_HEADER include/spdk/conf.h 00:02:54.758 TEST_HEADER include/spdk/blobfs.h 00:02:54.758 TEST_HEADER include/spdk/config.h 00:02:54.758 TEST_HEADER include/spdk/cpuset.h 00:02:54.758 TEST_HEADER include/spdk/crc16.h 00:02:54.758 TEST_HEADER include/spdk/crc32.h 00:02:54.758 TEST_HEADER include/spdk/crc64.h 00:02:54.758 TEST_HEADER include/spdk/dif.h 00:02:54.758 TEST_HEADER include/spdk/dma.h 00:02:54.758 TEST_HEADER include/spdk/endian.h 00:02:54.758 TEST_HEADER include/spdk/env_dpdk.h 00:02:54.758 CC app/nvmf_tgt/nvmf_main.o 00:02:54.758 TEST_HEADER 
include/spdk/env.h 00:02:54.758 TEST_HEADER include/spdk/fd_group.h 00:02:54.758 TEST_HEADER include/spdk/event.h 00:02:54.758 TEST_HEADER include/spdk/fd.h 00:02:54.758 TEST_HEADER include/spdk/file.h 00:02:54.758 TEST_HEADER include/spdk/fsdev.h 00:02:54.758 TEST_HEADER include/spdk/fsdev_module.h 00:02:54.758 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:54.758 TEST_HEADER include/spdk/ftl.h 00:02:54.758 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:54.758 CC app/iscsi_tgt/iscsi_tgt.o 00:02:54.758 TEST_HEADER include/spdk/gpt_spec.h 00:02:54.758 TEST_HEADER include/spdk/hexlify.h 00:02:54.758 CC app/spdk_dd/spdk_dd.o 00:02:54.758 TEST_HEADER include/spdk/idxd_spec.h 00:02:54.758 TEST_HEADER include/spdk/idxd.h 00:02:54.758 TEST_HEADER include/spdk/histogram_data.h 00:02:54.758 TEST_HEADER include/spdk/ioat.h 00:02:54.758 TEST_HEADER include/spdk/init.h 00:02:54.758 TEST_HEADER include/spdk/ioat_spec.h 00:02:54.758 TEST_HEADER include/spdk/iscsi_spec.h 00:02:54.758 TEST_HEADER include/spdk/json.h 00:02:54.758 TEST_HEADER include/spdk/keyring.h 00:02:54.758 TEST_HEADER include/spdk/jsonrpc.h 00:02:54.758 TEST_HEADER include/spdk/log.h 00:02:54.758 TEST_HEADER include/spdk/likely.h 00:02:54.758 TEST_HEADER include/spdk/keyring_module.h 00:02:54.758 TEST_HEADER include/spdk/lvol.h 00:02:54.758 TEST_HEADER include/spdk/md5.h 00:02:54.758 TEST_HEADER include/spdk/memory.h 00:02:54.758 TEST_HEADER include/spdk/mmio.h 00:02:54.758 TEST_HEADER include/spdk/net.h 00:02:54.758 TEST_HEADER include/spdk/nbd.h 00:02:54.758 TEST_HEADER include/spdk/notify.h 00:02:54.758 TEST_HEADER include/spdk/nvme.h 00:02:54.758 TEST_HEADER include/spdk/nvme_intel.h 00:02:54.758 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:54.759 TEST_HEADER include/spdk/nvme_spec.h 00:02:54.759 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:54.759 TEST_HEADER include/spdk/nvme_zns.h 00:02:54.759 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:54.759 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:54.759 TEST_HEADER include/spdk/nvmf_spec.h 00:02:54.759 TEST_HEADER include/spdk/nvmf.h 00:02:54.759 TEST_HEADER include/spdk/pci_ids.h 00:02:54.759 TEST_HEADER include/spdk/opal_spec.h 00:02:54.759 TEST_HEADER include/spdk/opal.h 00:02:54.759 TEST_HEADER include/spdk/pipe.h 00:02:54.759 TEST_HEADER include/spdk/nvmf_transport.h 00:02:54.759 TEST_HEADER include/spdk/queue.h 00:02:54.759 TEST_HEADER include/spdk/reduce.h 00:02:54.759 TEST_HEADER include/spdk/scheduler.h 00:02:54.759 TEST_HEADER include/spdk/rpc.h 00:02:54.759 TEST_HEADER include/spdk/scsi_spec.h 00:02:54.759 TEST_HEADER include/spdk/string.h 00:02:54.759 TEST_HEADER include/spdk/scsi.h 00:02:54.759 TEST_HEADER include/spdk/thread.h 00:02:54.759 TEST_HEADER include/spdk/stdinc.h 00:02:54.759 TEST_HEADER include/spdk/sock.h 00:02:54.759 TEST_HEADER include/spdk/trace.h 00:02:54.759 CC app/spdk_tgt/spdk_tgt.o 00:02:54.759 TEST_HEADER include/spdk/tree.h 00:02:54.759 TEST_HEADER include/spdk/ublk.h 00:02:54.759 TEST_HEADER include/spdk/trace_parser.h 00:02:54.759 TEST_HEADER include/spdk/uuid.h 00:02:54.759 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:54.759 TEST_HEADER include/spdk/util.h 00:02:54.759 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:54.759 TEST_HEADER include/spdk/version.h 00:02:54.759 TEST_HEADER include/spdk/vhost.h 00:02:54.759 TEST_HEADER include/spdk/vmd.h 00:02:54.759 TEST_HEADER include/spdk/zipf.h 00:02:54.759 TEST_HEADER include/spdk/xor.h 00:02:54.759 CXX test/cpp_headers/accel.o 00:02:54.759 CXX test/cpp_headers/accel_module.o 00:02:54.759 CXX 
test/cpp_headers/assert.o 00:02:54.759 CXX test/cpp_headers/barrier.o 00:02:54.759 CXX test/cpp_headers/base64.o 00:02:54.759 CXX test/cpp_headers/bdev.o 00:02:54.759 CXX test/cpp_headers/bdev_module.o 00:02:54.759 CXX test/cpp_headers/bit_array.o 00:02:54.759 CXX test/cpp_headers/bdev_zone.o 00:02:54.759 CXX test/cpp_headers/blob_bdev.o 00:02:54.759 CXX test/cpp_headers/bit_pool.o 00:02:54.759 CXX test/cpp_headers/blob.o 00:02:54.759 CXX test/cpp_headers/blobfs_bdev.o 00:02:54.759 CXX test/cpp_headers/blobfs.o 00:02:54.759 CXX test/cpp_headers/config.o 00:02:54.759 CXX test/cpp_headers/conf.o 00:02:54.759 CXX test/cpp_headers/cpuset.o 00:02:54.759 CXX test/cpp_headers/crc16.o 00:02:54.759 CXX test/cpp_headers/crc32.o 00:02:54.759 CXX test/cpp_headers/crc64.o 00:02:54.759 CXX test/cpp_headers/endian.o 00:02:54.759 CXX test/cpp_headers/dma.o 00:02:54.759 CXX test/cpp_headers/env_dpdk.o 00:02:54.759 CXX test/cpp_headers/dif.o 00:02:54.759 CXX test/cpp_headers/env.o 00:02:54.759 CXX test/cpp_headers/fd_group.o 00:02:54.759 CXX test/cpp_headers/event.o 00:02:54.759 CXX test/cpp_headers/fd.o 00:02:54.759 CXX test/cpp_headers/file.o 00:02:54.759 CXX test/cpp_headers/fsdev_module.o 00:02:54.759 CXX test/cpp_headers/fsdev.o 00:02:54.759 CXX test/cpp_headers/fuse_dispatcher.o 00:02:54.759 CXX test/cpp_headers/ftl.o 00:02:54.759 CXX test/cpp_headers/histogram_data.o 00:02:54.759 CXX test/cpp_headers/gpt_spec.o 00:02:54.759 CXX test/cpp_headers/hexlify.o 00:02:54.759 CXX test/cpp_headers/idxd.o 00:02:54.759 CXX test/cpp_headers/init.o 00:02:54.759 CXX test/cpp_headers/idxd_spec.o 00:02:54.759 CXX test/cpp_headers/ioat_spec.o 00:02:54.759 CXX test/cpp_headers/ioat.o 00:02:54.759 CXX test/cpp_headers/iscsi_spec.o 00:02:54.759 CXX test/cpp_headers/json.o 00:02:54.759 CXX test/cpp_headers/jsonrpc.o 00:02:54.759 CXX test/cpp_headers/keyring.o 00:02:54.759 CC test/env/pci/pci_ut.o 00:02:54.759 CXX test/cpp_headers/log.o 00:02:54.759 CXX test/cpp_headers/keyring_module.o 00:02:54.759 CXX test/cpp_headers/likely.o 00:02:54.759 CXX test/cpp_headers/md5.o 00:02:54.759 CXX test/cpp_headers/lvol.o 00:02:54.759 CXX test/cpp_headers/memory.o 00:02:54.759 CXX test/cpp_headers/mmio.o 00:02:54.759 CXX test/cpp_headers/nbd.o 00:02:54.759 CXX test/cpp_headers/nvme.o 00:02:54.759 CXX test/cpp_headers/net.o 00:02:54.759 CXX test/cpp_headers/notify.o 00:02:54.759 CXX test/cpp_headers/nvme_ocssd.o 00:02:54.759 CXX test/cpp_headers/nvme_intel.o 00:02:54.759 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:54.759 CXX test/cpp_headers/nvme_spec.o 00:02:54.759 CXX test/cpp_headers/nvme_zns.o 00:02:54.759 CXX test/cpp_headers/nvmf_cmd.o 00:02:54.759 CXX test/cpp_headers/nvmf.o 00:02:54.759 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:54.759 CXX test/cpp_headers/nvmf_spec.o 00:02:54.759 CXX test/cpp_headers/nvmf_transport.o 00:02:54.759 CXX test/cpp_headers/opal.o 00:02:54.759 CC examples/util/zipf/zipf.o 00:02:54.759 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:54.759 CC test/app/jsoncat/jsoncat.o 00:02:54.759 CC test/env/memory/memory_ut.o 00:02:54.759 CC test/thread/poller_perf/poller_perf.o 00:02:54.759 CC test/env/vtophys/vtophys.o 00:02:54.759 CC test/app/histogram_perf/histogram_perf.o 00:02:54.759 CC test/dma/test_dma/test_dma.o 00:02:54.759 CC examples/ioat/perf/perf.o 00:02:54.759 CC test/app/stub/stub.o 00:02:54.759 CC app/fio/nvme/fio_plugin.o 00:02:54.759 CC examples/ioat/verify/verify.o 00:02:54.759 CC test/app/bdev_svc/bdev_svc.o 00:02:55.022 CC app/fio/bdev/fio_plugin.o 00:02:55.022 LINK spdk_lspci 
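The TEST_HEADER/CXX pairs running through this stretch come from SPDK's public-header check: every header under include/spdk/ gets its own C++ translation unit, so a header that is not self-contained (or not C++-clean) fails to compile on its own. A minimal sketch of the idea, with hypothetical paths and flags:

    # Hedged reconstruction of what each "CXX test/cpp_headers/<name>.o" line
    # is doing; the real rules are generated by SPDK's test/cpp_headers build.
    for hdr in include/spdk/*.h; do
        echo "#include <spdk/$(basename "$hdr")>" > tu.cpp
        g++ -Iinclude -c tu.cpp -o /dev/null || echo "not self-contained: $hdr"
    done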
00:02:55.022 LINK rpc_client_test 00:02:55.022 LINK spdk_nvme_discover 00:02:55.286 LINK iscsi_tgt 00:02:55.286 CC test/env/mem_callbacks/mem_callbacks.o 00:02:55.286 LINK nvmf_tgt 00:02:55.286 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:55.286 LINK spdk_trace_record 00:02:55.286 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:55.286 LINK poller_perf 00:02:55.286 CXX test/cpp_headers/opal_spec.o 00:02:55.286 CXX test/cpp_headers/pci_ids.o 00:02:55.286 CXX test/cpp_headers/pipe.o 00:02:55.286 CXX test/cpp_headers/queue.o 00:02:55.286 CXX test/cpp_headers/reduce.o 00:02:55.286 LINK interrupt_tgt 00:02:55.286 LINK stub 00:02:55.286 CXX test/cpp_headers/rpc.o 00:02:55.286 CXX test/cpp_headers/scheduler.o 00:02:55.286 LINK jsoncat 00:02:55.286 CXX test/cpp_headers/scsi.o 00:02:55.286 CXX test/cpp_headers/scsi_spec.o 00:02:55.286 CXX test/cpp_headers/sock.o 00:02:55.286 CXX test/cpp_headers/stdinc.o 00:02:55.286 CXX test/cpp_headers/string.o 00:02:55.286 LINK zipf 00:02:55.286 CXX test/cpp_headers/thread.o 00:02:55.286 LINK histogram_perf 00:02:55.286 CXX test/cpp_headers/trace.o 00:02:55.286 CXX test/cpp_headers/trace_parser.o 00:02:55.286 CXX test/cpp_headers/tree.o 00:02:55.286 CXX test/cpp_headers/ublk.o 00:02:55.286 CXX test/cpp_headers/util.o 00:02:55.286 CXX test/cpp_headers/uuid.o 00:02:55.286 CXX test/cpp_headers/version.o 00:02:55.286 CXX test/cpp_headers/vfio_user_pci.o 00:02:55.286 CXX test/cpp_headers/vfio_user_spec.o 00:02:55.286 CXX test/cpp_headers/vhost.o 00:02:55.286 CXX test/cpp_headers/vmd.o 00:02:55.286 LINK vtophys 00:02:55.286 CXX test/cpp_headers/xor.o 00:02:55.286 CXX test/cpp_headers/zipf.o 00:02:55.286 LINK ioat_perf 00:02:55.286 LINK spdk_tgt 00:02:55.545 LINK env_dpdk_post_init 00:02:55.545 LINK verify 00:02:55.545 LINK bdev_svc 00:02:55.545 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:55.545 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:55.545 LINK spdk_dd 00:02:55.545 LINK pci_ut 00:02:55.545 LINK spdk_trace 00:02:55.805 LINK nvme_fuzz 00:02:55.805 LINK test_dma 00:02:55.805 CC test/event/event_perf/event_perf.o 00:02:55.805 CC test/event/reactor/reactor.o 00:02:55.805 LINK spdk_bdev 00:02:55.805 CC test/event/reactor_perf/reactor_perf.o 00:02:55.805 CC test/event/app_repeat/app_repeat.o 00:02:55.805 CC examples/vmd/lsvmd/lsvmd.o 00:02:55.805 CC examples/idxd/perf/perf.o 00:02:55.805 CC examples/vmd/led/led.o 00:02:55.805 CC test/event/scheduler/scheduler.o 00:02:55.805 LINK spdk_nvme_identify 00:02:55.805 CC examples/sock/hello_world/hello_sock.o 00:02:55.805 CC examples/thread/thread/thread_ex.o 00:02:55.805 LINK spdk_nvme 00:02:55.805 LINK vhost_fuzz 00:02:55.805 LINK mem_callbacks 00:02:55.805 LINK spdk_nvme_perf 00:02:56.063 LINK event_perf 00:02:56.063 LINK reactor_perf 00:02:56.063 LINK reactor 00:02:56.063 LINK lsvmd 00:02:56.063 LINK led 00:02:56.063 LINK app_repeat 00:02:56.063 LINK spdk_top 00:02:56.063 CC app/vhost/vhost.o 00:02:56.063 LINK scheduler 00:02:56.063 LINK hello_sock 00:02:56.063 LINK thread 00:02:56.063 LINK idxd_perf 00:02:56.321 CC test/nvme/fused_ordering/fused_ordering.o 00:02:56.321 CC test/nvme/simple_copy/simple_copy.o 00:02:56.321 CC test/nvme/err_injection/err_injection.o 00:02:56.321 CC test/nvme/connect_stress/connect_stress.o 00:02:56.321 CC test/nvme/aer/aer.o 00:02:56.321 CC test/nvme/startup/startup.o 00:02:56.321 CC test/nvme/cuse/cuse.o 00:02:56.321 CC test/nvme/overhead/overhead.o 00:02:56.321 CC test/nvme/compliance/nvme_compliance.o 00:02:56.321 CC test/nvme/reset/reset.o 00:02:56.321 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:02:56.321 CC test/nvme/sgl/sgl.o 00:02:56.321 LINK vhost 00:02:56.321 CC test/nvme/e2edp/nvme_dp.o 00:02:56.321 CC test/nvme/fdp/fdp.o 00:02:56.321 CC test/nvme/boot_partition/boot_partition.o 00:02:56.321 CC test/nvme/reserve/reserve.o 00:02:56.321 CC test/accel/dif/dif.o 00:02:56.321 CC test/blobfs/mkfs/mkfs.o 00:02:56.321 LINK memory_ut 00:02:56.321 CC test/lvol/esnap/esnap.o 00:02:56.321 LINK connect_stress 00:02:56.321 LINK err_injection 00:02:56.321 LINK startup 00:02:56.321 LINK boot_partition 00:02:56.321 LINK fused_ordering 00:02:56.321 LINK doorbell_aers 00:02:56.580 LINK reserve 00:02:56.580 LINK simple_copy 00:02:56.580 LINK reset 00:02:56.580 LINK aer 00:02:56.580 LINK mkfs 00:02:56.580 LINK nvme_dp 00:02:56.580 LINK overhead 00:02:56.580 LINK sgl 00:02:56.580 LINK nvme_compliance 00:02:56.580 LINK fdp 00:02:56.580 CC examples/nvme/hotplug/hotplug.o 00:02:56.580 CC examples/nvme/abort/abort.o 00:02:56.580 CC examples/nvme/arbitration/arbitration.o 00:02:56.580 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:56.580 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:56.580 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:56.580 CC examples/nvme/reconnect/reconnect.o 00:02:56.580 CC examples/nvme/hello_world/hello_world.o 00:02:56.580 CC examples/accel/perf/accel_perf.o 00:02:56.580 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:56.580 CC examples/blob/cli/blobcli.o 00:02:56.580 CC examples/blob/hello_world/hello_blob.o 00:02:56.838 LINK pmr_persistence 00:02:56.838 LINK cmb_copy 00:02:56.838 LINK hotplug 00:02:56.838 LINK hello_world 00:02:56.838 LINK dif 00:02:56.838 LINK arbitration 00:02:56.838 LINK abort 00:02:56.838 LINK iscsi_fuzz 00:02:56.838 LINK hello_fsdev 00:02:56.838 LINK hello_blob 00:02:56.838 LINK reconnect 00:02:57.097 LINK nvme_manage 00:02:57.097 LINK accel_perf 00:02:57.097 LINK blobcli 00:02:57.355 LINK cuse 00:02:57.355 CC test/bdev/bdevio/bdevio.o 00:02:57.613 CC examples/bdev/hello_world/hello_bdev.o 00:02:57.613 CC examples/bdev/bdevperf/bdevperf.o 00:02:57.613 LINK bdevio 00:02:57.871 LINK hello_bdev 00:02:58.130 LINK bdevperf 00:02:58.698 CC examples/nvmf/nvmf/nvmf.o 00:02:58.956 LINK nvmf 00:02:59.893 LINK esnap 00:03:00.152 00:03:00.152 real 0m55.671s 00:03:00.152 user 8m0.123s 00:03:00.152 sys 3m39.078s 00:03:00.152 12:55:03 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:00.152 12:55:03 make -- common/autotest_common.sh@10 -- $ set +x 00:03:00.152 ************************************ 00:03:00.152 END TEST make 00:03:00.152 ************************************ 00:03:00.152 12:55:03 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:00.152 12:55:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:00.152 12:55:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:00.152 12:55:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.152 12:55:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:00.152 12:55:03 -- pm/common@44 -- $ pid=2567328 00:03:00.152 12:55:03 -- pm/common@50 -- $ kill -TERM 2567328 00:03:00.152 12:55:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.152 12:55:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:00.152 12:55:03 -- pm/common@44 -- $ pid=2567329 00:03:00.152 12:55:03 -- pm/common@50 -- $ kill -TERM 2567329 00:03:00.152 12:55:03 -- pm/common@42 -- $ 
for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.152 12:55:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:00.152 12:55:03 -- pm/common@44 -- $ pid=2567331 00:03:00.152 12:55:03 -- pm/common@50 -- $ kill -TERM 2567331 00:03:00.152 12:55:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.152 12:55:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:00.152 12:55:03 -- pm/common@44 -- $ pid=2567355 00:03:00.152 12:55:03 -- pm/common@50 -- $ sudo -E kill -TERM 2567355 00:03:00.152 12:55:03 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:00.152 12:55:03 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:00.412 12:55:03 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:00.412 12:55:03 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:00.412 12:55:03 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:00.412 12:55:03 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:00.412 12:55:03 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:00.412 12:55:03 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:00.412 12:55:03 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:00.412 12:55:03 -- scripts/common.sh@336 -- # IFS=.-: 00:03:00.412 12:55:03 -- scripts/common.sh@336 -- # read -ra ver1 00:03:00.412 12:55:03 -- scripts/common.sh@337 -- # IFS=.-: 00:03:00.412 12:55:03 -- scripts/common.sh@337 -- # read -ra ver2 00:03:00.412 12:55:03 -- scripts/common.sh@338 -- # local 'op=<' 00:03:00.412 12:55:03 -- scripts/common.sh@340 -- # ver1_l=2 00:03:00.412 12:55:03 -- scripts/common.sh@341 -- # ver2_l=1 00:03:00.412 12:55:03 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:00.412 12:55:03 -- scripts/common.sh@344 -- # case "$op" in 00:03:00.412 12:55:03 -- scripts/common.sh@345 -- # : 1 00:03:00.412 12:55:03 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:00.412 12:55:03 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:00.412 12:55:03 -- scripts/common.sh@365 -- # decimal 1 00:03:00.412 12:55:03 -- scripts/common.sh@353 -- # local d=1 00:03:00.412 12:55:03 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:00.412 12:55:03 -- scripts/common.sh@355 -- # echo 1 00:03:00.412 12:55:03 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:00.412 12:55:03 -- scripts/common.sh@366 -- # decimal 2 00:03:00.412 12:55:03 -- scripts/common.sh@353 -- # local d=2 00:03:00.412 12:55:03 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:00.412 12:55:03 -- scripts/common.sh@355 -- # echo 2 00:03:00.412 12:55:03 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:00.412 12:55:03 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:00.412 12:55:03 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:00.412 12:55:03 -- scripts/common.sh@368 -- # return 0 00:03:00.412 12:55:03 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:00.412 12:55:03 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:00.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.412 --rc genhtml_branch_coverage=1 00:03:00.412 --rc genhtml_function_coverage=1 00:03:00.412 --rc genhtml_legend=1 00:03:00.412 --rc geninfo_all_blocks=1 00:03:00.412 --rc geninfo_unexecuted_blocks=1 00:03:00.412 00:03:00.412 ' 00:03:00.412 12:55:03 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:00.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.412 --rc genhtml_branch_coverage=1 00:03:00.412 --rc genhtml_function_coverage=1 00:03:00.412 --rc genhtml_legend=1 00:03:00.412 --rc geninfo_all_blocks=1 00:03:00.412 --rc geninfo_unexecuted_blocks=1 00:03:00.412 00:03:00.412 ' 00:03:00.412 12:55:03 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:00.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.413 --rc genhtml_branch_coverage=1 00:03:00.413 --rc genhtml_function_coverage=1 00:03:00.413 --rc genhtml_legend=1 00:03:00.413 --rc geninfo_all_blocks=1 00:03:00.413 --rc geninfo_unexecuted_blocks=1 00:03:00.413 00:03:00.413 ' 00:03:00.413 12:55:03 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:00.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.413 --rc genhtml_branch_coverage=1 00:03:00.413 --rc genhtml_function_coverage=1 00:03:00.413 --rc genhtml_legend=1 00:03:00.413 --rc geninfo_all_blocks=1 00:03:00.413 --rc geninfo_unexecuted_blocks=1 00:03:00.413 00:03:00.413 ' 00:03:00.413 12:55:03 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:00.413 12:55:03 -- nvmf/common.sh@7 -- # uname -s 00:03:00.413 12:55:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:00.413 12:55:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:00.413 12:55:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:00.413 12:55:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:00.413 12:55:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:00.413 12:55:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:00.413 12:55:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:00.413 12:55:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:00.413 12:55:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:00.413 12:55:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:00.413 12:55:03 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:00.413 12:55:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:00.413 12:55:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:00.413 12:55:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:00.413 12:55:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:00.413 12:55:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:00.413 12:55:03 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:00.413 12:55:03 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:00.413 12:55:03 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:00.413 12:55:03 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:00.413 12:55:03 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:00.413 12:55:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.413 12:55:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.413 12:55:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.413 12:55:03 -- paths/export.sh@5 -- # export PATH 00:03:00.413 12:55:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:00.413 12:55:03 -- nvmf/common.sh@51 -- # : 0 00:03:00.413 12:55:03 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:00.413 12:55:03 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:00.413 12:55:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:00.413 12:55:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:00.413 12:55:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:00.413 12:55:03 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:00.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:00.413 12:55:03 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:00.413 12:55:03 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:00.413 12:55:03 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:00.413 12:55:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:00.413 12:55:03 -- spdk/autotest.sh@32 -- # uname -s 00:03:00.413 12:55:03 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:00.413 12:55:03 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:00.413 12:55:03 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
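Between saving the stock systemd-coredump pattern (spdk/autotest.sh@33 above) and echoing the replacement collector (next record), the intent is the standard piped core_pattern handoff. A minimal sketch, assuming the usual /proc/sys/kernel/core_pattern interface; the trace itself only shows the pattern strings, and $output_dir / $rootdir are hypothetical shorthands for spdk/../output and the spdk checkout:

    # core(5) specifiers: %P = PID of dumping process, %s = signal number, %t = time of dump
    old_pattern=$(cat /proc/sys/kernel/core_pattern)   # '|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' on this host
    mkdir -p "$output_dir/coredumps"                   # directory the collector writes into
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" | sudo tee /proc/sys/kernel/core_pattern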
00:03:00.413 12:55:03 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:00.413 12:55:03 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:00.413 12:55:03 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:00.413 12:55:03 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:00.413 12:55:03 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:00.413 12:55:03 -- spdk/autotest.sh@48 -- # udevadm_pid=2630322 00:03:00.413 12:55:03 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:00.413 12:55:03 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:00.413 12:55:03 -- pm/common@17 -- # local monitor 00:03:00.413 12:55:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.413 12:55:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.413 12:55:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.413 12:55:03 -- pm/common@21 -- # date +%s 00:03:00.413 12:55:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:00.413 12:55:03 -- pm/common@21 -- # date +%s 00:03:00.413 12:55:03 -- pm/common@25 -- # sleep 1 00:03:00.413 12:55:03 -- pm/common@21 -- # date +%s 00:03:00.413 12:55:03 -- pm/common@21 -- # date +%s 00:03:00.413 12:55:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732017303 00:03:00.413 12:55:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732017303 00:03:00.413 12:55:03 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732017303 00:03:00.413 12:55:03 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732017303 00:03:00.413 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732017303_collect-cpu-load.pm.log 00:03:00.413 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732017303_collect-vmstat.pm.log 00:03:00.413 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732017303_collect-cpu-temp.pm.log 00:03:00.413 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732017303_collect-bmc-pm.bmc.pm.log 00:03:01.349 12:55:04 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:01.349 12:55:04 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:01.349 12:55:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:01.349 12:55:04 -- common/autotest_common.sh@10 -- # set +x 00:03:01.349 12:55:04 -- spdk/autotest.sh@59 -- # create_test_list 00:03:01.349 12:55:04 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:01.349 12:55:04 -- common/autotest_common.sh@10 -- # set +x 00:03:01.608 12:55:04 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:01.608 12:55:04 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:01.608 12:55:04 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:01.608 12:55:04 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:01.608 12:55:04 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:01.608 12:55:04 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:01.608 12:55:04 -- common/autotest_common.sh@1457 -- # uname 00:03:01.608 12:55:04 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:01.608 12:55:04 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:01.608 12:55:04 -- common/autotest_common.sh@1477 -- # uname 00:03:01.608 12:55:04 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:01.608 12:55:04 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:01.608 12:55:04 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:01.608 lcov: LCOV version 1.15 00:03:01.608 12:55:04 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:19.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:19.703 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:27.859 12:55:29 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:27.859 12:55:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:27.859 12:55:29 -- common/autotest_common.sh@10 -- # set +x 00:03:27.859 12:55:29 -- spdk/autotest.sh@78 -- # rm -f 00:03:27.859 12:55:29 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.237 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:29.237 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:29.237 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:29.237 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:29.495 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:29.495 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:29.495 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:29.495 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:29.495 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:29.495 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:29.495 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:29.495 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:29.495 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:29.495 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:29.495 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:29.495 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:29.754 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:29.754 12:55:32 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:29.754 12:55:32 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:29.754 12:55:32 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:29.754 12:55:32 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:29.754 12:55:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:29.754 12:55:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:29.754 12:55:32 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:29.754 12:55:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:29.754 12:55:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:29.754 12:55:32 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:29.754 12:55:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:29.754 12:55:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:29.754 12:55:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:29.754 12:55:32 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:29.754 12:55:32 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:29.754 No valid GPT data, bailing 00:03:29.754 12:55:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:29.754 12:55:33 -- scripts/common.sh@394 -- # pt= 00:03:29.754 12:55:33 -- scripts/common.sh@395 -- # return 1 00:03:29.754 12:55:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:29.754 1+0 records in 00:03:29.754 1+0 records out 00:03:29.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00153494 s, 683 MB/s 00:03:29.754 12:55:33 -- spdk/autotest.sh@105 -- # sync 00:03:29.754 12:55:33 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:29.754 12:55:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:29.754 12:55:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:36.328 12:55:38 -- spdk/autotest.sh@111 -- # uname -s 00:03:36.328 12:55:38 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:36.328 12:55:38 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:36.328 12:55:38 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:38.236 Hugepages 00:03:38.236 node hugesize free / total 00:03:38.236 node0 1048576kB 0 / 0 00:03:38.236 node0 2048kB 0 / 0 00:03:38.236 node1 1048576kB 0 / 0 00:03:38.236 node1 2048kB 0 / 0 00:03:38.236 00:03:38.236 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:38.236 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:38.236 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:38.236 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:38.236 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:38.236 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:38.236 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:38.236 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:38.236 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:38.236 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:38.236 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:38.236 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:38.236 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:38.236 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:38.236 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:38.236 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:38.236 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:38.236 I/OAT 0000:80:04.7 8086 
2021 1 ioatdma - - 00:03:38.236 12:55:41 -- spdk/autotest.sh@117 -- # uname -s 00:03:38.236 12:55:41 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:38.236 12:55:41 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:38.236 12:55:41 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.637 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:41.637 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:41.637 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:41.637 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:41.637 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:41.637 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:41.637 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:41.637 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:41.637 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:41.637 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:41.637 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:41.637 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:41.637 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:41.637 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:41.637 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:41.637 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:41.896 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.154 12:55:45 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:43.091 12:55:46 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:43.091 12:55:46 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:43.091 12:55:46 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:43.091 12:55:46 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:43.091 12:55:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:43.091 12:55:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:43.091 12:55:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:43.091 12:55:46 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:43.091 12:55:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:43.091 12:55:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:43.091 12:55:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:43.091 12:55:46 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.379 Waiting for block devices as requested 00:03:46.379 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:46.379 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:46.379 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:46.379 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:46.379 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:46.379 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:46.379 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:46.639 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:46.639 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:46.639 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:46.898 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:46.898 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:46.898 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:46.898 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:47.157 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:47.157 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:47.157 0000:80:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:03:47.416 12:55:50 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:47.416 12:55:50 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:47.416 12:55:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:47.416 12:55:50 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:47.416 12:55:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:47.416 12:55:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:47.416 12:55:50 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:47.416 12:55:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:47.416 12:55:50 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:47.416 12:55:50 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:47.416 12:55:50 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:47.416 12:55:50 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:47.416 12:55:50 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:47.416 12:55:50 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:47.416 12:55:50 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:47.416 12:55:50 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:47.417 12:55:50 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:47.417 12:55:50 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:47.417 12:55:50 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:47.417 12:55:50 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:47.417 12:55:50 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:47.417 12:55:50 -- common/autotest_common.sh@1543 -- # continue 00:03:47.417 12:55:50 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:47.417 12:55:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:47.417 12:55:50 -- common/autotest_common.sh@10 -- # set +x 00:03:47.417 12:55:50 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:47.417 12:55:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.417 12:55:50 -- common/autotest_common.sh@10 -- # set +x 00:03:47.417 12:55:50 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:50.710 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:50.710 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:50.710 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:50.710 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:50.710 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:50.710 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:50.711 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:50.711 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:50.711 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:50.711 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:50.711 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:50.711 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:50.711 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:50.711 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:50.711 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:50.711 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:51.280 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:51.280 12:55:54 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:51.280 12:55:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:51.280 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:03:51.280 12:55:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:51.280 12:55:54 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:51.280 12:55:54 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:51.280 12:55:54 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:51.280 12:55:54 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:51.280 12:55:54 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:51.280 12:55:54 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:51.280 12:55:54 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:51.280 12:55:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:51.280 12:55:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:51.280 12:55:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:51.280 12:55:54 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:51.280 12:55:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:51.540 12:55:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:51.540 12:55:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:51.540 12:55:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:51.540 12:55:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:51.540 12:55:54 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:51.540 12:55:54 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:51.540 12:55:54 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:51.540 12:55:54 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:51.540 12:55:54 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:51.540 12:55:54 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:51.540 12:55:54 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2644551 00:03:51.540 12:55:54 -- common/autotest_common.sh@1585 -- # waitforlisten 2644551 00:03:51.540 12:55:54 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:51.540 12:55:54 -- common/autotest_common.sh@835 -- # '[' -z 2644551 ']' 00:03:51.540 12:55:54 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:51.540 12:55:54 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:51.540 12:55:54 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:51.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:51.540 12:55:54 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:51.540 12:55:54 -- common/autotest_common.sh@10 -- # set +x 00:03:51.540 [2024-11-19 12:55:54.735955] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
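With spdk_tgt up and waitforlisten watching /var/tmp/spdk.sock, the opal_revert_cleanup path drives the target over JSON-RPC. Issued by hand, the two calls traced a few records below would look like this sketch (flags exactly as they appear in the trace):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0   # exposes nvme0n1
    ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test   # fails here with -32602: this drive has no Opal support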
00:03:51.541 [2024-11-19 12:55:54.736008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2644551 ] 00:03:51.541 [2024-11-19 12:55:54.811935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.541 [2024-11-19 12:55:54.853405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.800 12:55:55 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:51.800 12:55:55 -- common/autotest_common.sh@868 -- # return 0 00:03:51.800 12:55:55 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:51.800 12:55:55 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:51.800 12:55:55 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:55.092 nvme0n1 00:03:55.092 12:55:58 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:55.092 [2024-11-19 12:55:58.262157] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:55.092 request: 00:03:55.092 { 00:03:55.092 "nvme_ctrlr_name": "nvme0", 00:03:55.092 "password": "test", 00:03:55.092 "method": "bdev_nvme_opal_revert", 00:03:55.092 "req_id": 1 00:03:55.092 } 00:03:55.092 Got JSON-RPC error response 00:03:55.092 response: 00:03:55.092 { 00:03:55.092 "code": -32602, 00:03:55.092 "message": "Invalid parameters" 00:03:55.092 } 00:03:55.092 12:55:58 -- common/autotest_common.sh@1591 -- # true 00:03:55.092 12:55:58 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:55.092 12:55:58 -- common/autotest_common.sh@1595 -- # killprocess 2644551 00:03:55.092 12:55:58 -- common/autotest_common.sh@954 -- # '[' -z 2644551 ']' 00:03:55.092 12:55:58 -- common/autotest_common.sh@958 -- # kill -0 2644551 00:03:55.092 12:55:58 -- common/autotest_common.sh@959 -- # uname 00:03:55.092 12:55:58 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:55.092 12:55:58 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2644551 00:03:55.092 12:55:58 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:55.092 12:55:58 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:55.092 12:55:58 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2644551' 00:03:55.092 killing process with pid 2644551 00:03:55.092 12:55:58 -- common/autotest_common.sh@973 -- # kill 2644551 00:03:55.092 12:55:58 -- common/autotest_common.sh@978 -- # wait 2644551 00:03:56.998 12:55:59 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:56.998 12:55:59 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:56.998 12:55:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:56.998 12:55:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:56.998 12:55:59 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:56.998 12:55:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.998 12:55:59 -- common/autotest_common.sh@10 -- # set +x 00:03:56.998 12:55:59 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:56.998 12:55:59 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:56.998 12:55:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.998 12:55:59 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:03:56.998 12:55:59 -- common/autotest_common.sh@10 -- # set +x 00:03:56.998 ************************************ 00:03:56.998 START TEST env 00:03:56.998 ************************************ 00:03:56.998 12:55:59 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:56.998 * Looking for test storage... 00:03:56.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:56.998 12:56:00 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:56.998 12:56:00 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:56.998 12:56:00 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:56.998 12:56:00 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:56.998 12:56:00 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.998 12:56:00 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.998 12:56:00 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.998 12:56:00 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.998 12:56:00 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.998 12:56:00 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.998 12:56:00 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.998 12:56:00 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.999 12:56:00 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.999 12:56:00 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.999 12:56:00 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.999 12:56:00 env -- scripts/common.sh@344 -- # case "$op" in 00:03:56.999 12:56:00 env -- scripts/common.sh@345 -- # : 1 00:03:56.999 12:56:00 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.999 12:56:00 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.999 12:56:00 env -- scripts/common.sh@365 -- # decimal 1 00:03:56.999 12:56:00 env -- scripts/common.sh@353 -- # local d=1 00:03:56.999 12:56:00 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.999 12:56:00 env -- scripts/common.sh@355 -- # echo 1 00:03:56.999 12:56:00 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.999 12:56:00 env -- scripts/common.sh@366 -- # decimal 2 00:03:56.999 12:56:00 env -- scripts/common.sh@353 -- # local d=2 00:03:56.999 12:56:00 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.999 12:56:00 env -- scripts/common.sh@355 -- # echo 2 00:03:56.999 12:56:00 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.999 12:56:00 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.999 12:56:00 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.999 12:56:00 env -- scripts/common.sh@368 -- # return 0 00:03:56.999 12:56:00 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.999 12:56:00 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:56.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.999 --rc genhtml_branch_coverage=1 00:03:56.999 --rc genhtml_function_coverage=1 00:03:56.999 --rc genhtml_legend=1 00:03:56.999 --rc geninfo_all_blocks=1 00:03:56.999 --rc geninfo_unexecuted_blocks=1 00:03:56.999 00:03:56.999 ' 00:03:56.999 12:56:00 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:56.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.999 --rc genhtml_branch_coverage=1 00:03:56.999 --rc genhtml_function_coverage=1 00:03:56.999 --rc genhtml_legend=1 00:03:56.999 --rc geninfo_all_blocks=1 00:03:56.999 --rc geninfo_unexecuted_blocks=1 00:03:56.999 00:03:56.999 ' 00:03:56.999 12:56:00 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:56.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.999 --rc genhtml_branch_coverage=1 00:03:56.999 --rc genhtml_function_coverage=1 00:03:56.999 --rc genhtml_legend=1 00:03:56.999 --rc geninfo_all_blocks=1 00:03:56.999 --rc geninfo_unexecuted_blocks=1 00:03:56.999 00:03:56.999 ' 00:03:56.999 12:56:00 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:56.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.999 --rc genhtml_branch_coverage=1 00:03:56.999 --rc genhtml_function_coverage=1 00:03:56.999 --rc genhtml_legend=1 00:03:56.999 --rc geninfo_all_blocks=1 00:03:56.999 --rc geninfo_unexecuted_blocks=1 00:03:56.999 00:03:56.999 ' 00:03:56.999 12:56:00 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:56.999 12:56:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.999 12:56:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.999 12:56:00 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.999 ************************************ 00:03:56.999 START TEST env_memory 00:03:56.999 ************************************ 00:03:56.999 12:56:00 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:56.999 00:03:56.999 00:03:56.999 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.999 http://cunit.sourceforge.net/ 00:03:56.999 00:03:56.999 00:03:56.999 Suite: memory 00:03:56.999 Test: alloc and free memory map ...[2024-11-19 12:56:00.225409] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:56.999 passed 00:03:56.999 Test: mem map translation ...[2024-11-19 12:56:00.243377] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:56.999 [2024-11-19 12:56:00.243393] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:56.999 [2024-11-19 12:56:00.243428] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:56.999 [2024-11-19 12:56:00.243434] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:56.999 passed 00:03:56.999 Test: mem map registration ...[2024-11-19 12:56:00.280118] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:56.999 [2024-11-19 12:56:00.280135] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:56.999 passed 00:03:56.999 Test: mem map adjacent registrations ...passed 00:03:56.999 00:03:56.999 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.999 suites 1 1 n/a 0 0 00:03:56.999 tests 4 4 4 0 0 00:03:56.999 asserts 152 152 152 0 n/a 00:03:56.999 00:03:56.999 Elapsed time = 0.136 seconds 00:03:56.999 00:03:56.999 real 0m0.149s 00:03:56.999 user 0m0.139s 00:03:56.999 sys 0m0.010s 00:03:56.999 12:56:00 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.999 12:56:00 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:56.999 ************************************ 00:03:56.999 END TEST env_memory 00:03:56.999 ************************************ 00:03:56.999 12:56:00 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:56.999 12:56:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.999 12:56:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.999 12:56:00 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.259 ************************************ 00:03:57.260 START TEST env_vtophys 00:03:57.260 ************************************ 00:03:57.260 12:56:00 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:57.260 EAL: lib.eal log level changed from notice to debug 00:03:57.260 EAL: Detected lcore 0 as core 0 on socket 0 00:03:57.260 EAL: Detected lcore 1 as core 1 on socket 0 00:03:57.260 EAL: Detected lcore 2 as core 2 on socket 0 00:03:57.260 EAL: Detected lcore 3 as core 3 on socket 0 00:03:57.260 EAL: Detected lcore 4 as core 4 on socket 0 00:03:57.260 EAL: Detected lcore 5 as core 5 on socket 0 00:03:57.260 EAL: Detected lcore 6 as core 6 on socket 0 00:03:57.260 EAL: Detected lcore 7 as core 8 on socket 0 00:03:57.260 EAL: Detected lcore 8 as core 9 on socket 0 00:03:57.260 EAL: Detected lcore 9 as core 10 on socket 0 00:03:57.260 EAL: Detected lcore 10 as 
core 11 on socket 0 00:03:57.260 EAL: Detected lcore 11 as core 12 on socket 0 00:03:57.260 EAL: Detected lcore 12 as core 13 on socket 0 00:03:57.260 EAL: Detected lcore 13 as core 16 on socket 0 00:03:57.260 EAL: Detected lcore 14 as core 17 on socket 0 00:03:57.260 EAL: Detected lcore 15 as core 18 on socket 0 00:03:57.260 EAL: Detected lcore 16 as core 19 on socket 0 00:03:57.260 EAL: Detected lcore 17 as core 20 on socket 0 00:03:57.260 EAL: Detected lcore 18 as core 21 on socket 0 00:03:57.260 EAL: Detected lcore 19 as core 25 on socket 0 00:03:57.260 EAL: Detected lcore 20 as core 26 on socket 0 00:03:57.260 EAL: Detected lcore 21 as core 27 on socket 0 00:03:57.260 EAL: Detected lcore 22 as core 28 on socket 0 00:03:57.260 EAL: Detected lcore 23 as core 29 on socket 0 00:03:57.260 EAL: Detected lcore 24 as core 0 on socket 1 00:03:57.260 EAL: Detected lcore 25 as core 1 on socket 1 00:03:57.260 EAL: Detected lcore 26 as core 2 on socket 1 00:03:57.260 EAL: Detected lcore 27 as core 3 on socket 1 00:03:57.260 EAL: Detected lcore 28 as core 4 on socket 1 00:03:57.260 EAL: Detected lcore 29 as core 5 on socket 1 00:03:57.260 EAL: Detected lcore 30 as core 6 on socket 1 00:03:57.260 EAL: Detected lcore 31 as core 9 on socket 1 00:03:57.260 EAL: Detected lcore 32 as core 10 on socket 1 00:03:57.260 EAL: Detected lcore 33 as core 11 on socket 1 00:03:57.260 EAL: Detected lcore 34 as core 12 on socket 1 00:03:57.260 EAL: Detected lcore 35 as core 13 on socket 1 00:03:57.260 EAL: Detected lcore 36 as core 16 on socket 1 00:03:57.260 EAL: Detected lcore 37 as core 17 on socket 1 00:03:57.260 EAL: Detected lcore 38 as core 18 on socket 1 00:03:57.260 EAL: Detected lcore 39 as core 19 on socket 1 00:03:57.260 EAL: Detected lcore 40 as core 20 on socket 1 00:03:57.260 EAL: Detected lcore 41 as core 21 on socket 1 00:03:57.260 EAL: Detected lcore 42 as core 24 on socket 1 00:03:57.260 EAL: Detected lcore 43 as core 25 on socket 1 00:03:57.260 EAL: Detected lcore 44 as core 26 on socket 1 00:03:57.260 EAL: Detected lcore 45 as core 27 on socket 1 00:03:57.260 EAL: Detected lcore 46 as core 28 on socket 1 00:03:57.260 EAL: Detected lcore 47 as core 29 on socket 1 00:03:57.260 EAL: Detected lcore 48 as core 0 on socket 0 00:03:57.260 EAL: Detected lcore 49 as core 1 on socket 0 00:03:57.260 EAL: Detected lcore 50 as core 2 on socket 0 00:03:57.260 EAL: Detected lcore 51 as core 3 on socket 0 00:03:57.260 EAL: Detected lcore 52 as core 4 on socket 0 00:03:57.260 EAL: Detected lcore 53 as core 5 on socket 0 00:03:57.260 EAL: Detected lcore 54 as core 6 on socket 0 00:03:57.260 EAL: Detected lcore 55 as core 8 on socket 0 00:03:57.260 EAL: Detected lcore 56 as core 9 on socket 0 00:03:57.260 EAL: Detected lcore 57 as core 10 on socket 0 00:03:57.260 EAL: Detected lcore 58 as core 11 on socket 0 00:03:57.260 EAL: Detected lcore 59 as core 12 on socket 0 00:03:57.260 EAL: Detected lcore 60 as core 13 on socket 0 00:03:57.260 EAL: Detected lcore 61 as core 16 on socket 0 00:03:57.260 EAL: Detected lcore 62 as core 17 on socket 0 00:03:57.260 EAL: Detected lcore 63 as core 18 on socket 0 00:03:57.260 EAL: Detected lcore 64 as core 19 on socket 0 00:03:57.260 EAL: Detected lcore 65 as core 20 on socket 0 00:03:57.260 EAL: Detected lcore 66 as core 21 on socket 0 00:03:57.260 EAL: Detected lcore 67 as core 25 on socket 0 00:03:57.260 EAL: Detected lcore 68 as core 26 on socket 0 00:03:57.260 EAL: Detected lcore 69 as core 27 on socket 0 00:03:57.260 EAL: Detected lcore 70 as core 28 on socket 0 
00:03:57.260 EAL: Detected lcore 71 as core 29 on socket 0 00:03:57.260 EAL: Detected lcore 72 as core 0 on socket 1 00:03:57.260 EAL: Detected lcore 73 as core 1 on socket 1 00:03:57.260 EAL: Detected lcore 74 as core 2 on socket 1 00:03:57.260 EAL: Detected lcore 75 as core 3 on socket 1 00:03:57.260 EAL: Detected lcore 76 as core 4 on socket 1 00:03:57.260 EAL: Detected lcore 77 as core 5 on socket 1 00:03:57.260 EAL: Detected lcore 78 as core 6 on socket 1 00:03:57.260 EAL: Detected lcore 79 as core 9 on socket 1 00:03:57.260 EAL: Detected lcore 80 as core 10 on socket 1 00:03:57.260 EAL: Detected lcore 81 as core 11 on socket 1 00:03:57.260 EAL: Detected lcore 82 as core 12 on socket 1 00:03:57.260 EAL: Detected lcore 83 as core 13 on socket 1 00:03:57.260 EAL: Detected lcore 84 as core 16 on socket 1 00:03:57.260 EAL: Detected lcore 85 as core 17 on socket 1 00:03:57.260 EAL: Detected lcore 86 as core 18 on socket 1 00:03:57.260 EAL: Detected lcore 87 as core 19 on socket 1 00:03:57.260 EAL: Detected lcore 88 as core 20 on socket 1 00:03:57.260 EAL: Detected lcore 89 as core 21 on socket 1 00:03:57.260 EAL: Detected lcore 90 as core 24 on socket 1 00:03:57.260 EAL: Detected lcore 91 as core 25 on socket 1 00:03:57.260 EAL: Detected lcore 92 as core 26 on socket 1 00:03:57.260 EAL: Detected lcore 93 as core 27 on socket 1 00:03:57.260 EAL: Detected lcore 94 as core 28 on socket 1 00:03:57.260 EAL: Detected lcore 95 as core 29 on socket 1 00:03:57.260 EAL: Maximum logical cores by configuration: 128 00:03:57.260 EAL: Detected CPU lcores: 96 00:03:57.260 EAL: Detected NUMA nodes: 2 00:03:57.260 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:57.260 EAL: Detected shared linkage of DPDK 00:03:57.260 EAL: No shared files mode enabled, IPC will be disabled 00:03:57.260 EAL: Bus pci wants IOVA as 'DC' 00:03:57.260 EAL: Buses did not request a specific IOVA mode. 00:03:57.260 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:57.260 EAL: Selected IOVA mode 'VA' 00:03:57.260 EAL: Probing VFIO support... 00:03:57.260 EAL: IOMMU type 1 (Type 1) is supported 00:03:57.260 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:57.260 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:57.260 EAL: VFIO support initialized 00:03:57.260 EAL: Ask a virtual area of 0x2e000 bytes 00:03:57.260 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:57.260 EAL: Setting up physically contiguous memory... 
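The virtual-area reservations that follow are easy to sanity-check: EAL creates 4 memseg lists per NUMA node, each sized n_segs x hugepage_sz = 8192 x 2 MiB, which is exactly the 0x400000000 (16 GiB) figure in every "VA reserved for memseg list" record below:

    echo $(( 8192 * 2097152 ))   # 17179869184 bytes = 0x400000000 = 16 GiB per list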
00:03:57.260 EAL: Setting maximum number of open files to 524288 00:03:57.260 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:57.260 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:57.260 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:57.260 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.260 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:57.260 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:57.260 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.260 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:57.260 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:57.260 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.260 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:57.260 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:57.260 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.260 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:57.260 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:57.260 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.260 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:57.260 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:57.260 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.260 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:57.260 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:57.260 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.260 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:57.260 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:57.260 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.260 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:57.260 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:57.260 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:57.260 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.260 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:57.260 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:57.260 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.260 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:57.260 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:57.260 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.260 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:57.260 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:57.260 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.260 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:57.261 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:57.261 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.261 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:57.261 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:57.261 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.261 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:57.261 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:57.261 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.261 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:57.261 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:57.261 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.261 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:57.261 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:57.261 EAL: Hugepages will be freed exactly as allocated. 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: TSC frequency is ~2300000 KHz 00:03:57.261 EAL: Main lcore 0 is ready (tid=7ff95ae83a00;cpuset=[0]) 00:03:57.261 EAL: Trying to obtain current memory policy. 00:03:57.261 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.261 EAL: Restoring previous memory policy: 0 00:03:57.261 EAL: request: mp_malloc_sync 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: Heap on socket 0 was expanded by 2MB 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:57.261 EAL: Mem event callback 'spdk:(nil)' registered 00:03:57.261 00:03:57.261 00:03:57.261 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.261 http://cunit.sourceforge.net/ 00:03:57.261 00:03:57.261 00:03:57.261 Suite: components_suite 00:03:57.261 Test: vtophys_malloc_test ...passed 00:03:57.261 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:57.261 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.261 EAL: Restoring previous memory policy: 4 00:03:57.261 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.261 EAL: request: mp_malloc_sync 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: Heap on socket 0 was expanded by 4MB 00:03:57.261 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.261 EAL: request: mp_malloc_sync 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: Heap on socket 0 was shrunk by 4MB 00:03:57.261 EAL: Trying to obtain current memory policy. 00:03:57.261 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.261 EAL: Restoring previous memory policy: 4 00:03:57.261 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.261 EAL: request: mp_malloc_sync 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: Heap on socket 0 was expanded by 6MB 00:03:57.261 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.261 EAL: request: mp_malloc_sync 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: Heap on socket 0 was shrunk by 6MB 00:03:57.261 EAL: Trying to obtain current memory policy. 00:03:57.261 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.261 EAL: Restoring previous memory policy: 4 00:03:57.261 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.261 EAL: request: mp_malloc_sync 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: Heap on socket 0 was expanded by 10MB 00:03:57.261 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.261 EAL: request: mp_malloc_sync 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: Heap on socket 0 was shrunk by 10MB 00:03:57.261 EAL: Trying to obtain current memory policy. 
00:03:57.261 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.261 EAL: Restoring previous memory policy: 4 00:03:57.261 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.261 EAL: request: mp_malloc_sync 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: Heap on socket 0 was expanded by 18MB 00:03:57.261 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.261 EAL: request: mp_malloc_sync 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: Heap on socket 0 was shrunk by 18MB 00:03:57.261 EAL: Trying to obtain current memory policy. 00:03:57.261 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.261 EAL: Restoring previous memory policy: 4 00:03:57.261 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.261 EAL: request: mp_malloc_sync 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: Heap on socket 0 was expanded by 34MB 00:03:57.261 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.261 EAL: request: mp_malloc_sync 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: Heap on socket 0 was shrunk by 34MB 00:03:57.261 EAL: Trying to obtain current memory policy. 00:03:57.261 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.261 EAL: Restoring previous memory policy: 4 00:03:57.261 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.261 EAL: request: mp_malloc_sync 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: Heap on socket 0 was expanded by 66MB 00:03:57.261 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.261 EAL: request: mp_malloc_sync 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: Heap on socket 0 was shrunk by 66MB 00:03:57.261 EAL: Trying to obtain current memory policy. 00:03:57.261 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.261 EAL: Restoring previous memory policy: 4 00:03:57.261 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.261 EAL: request: mp_malloc_sync 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: Heap on socket 0 was expanded by 130MB 00:03:57.261 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.261 EAL: request: mp_malloc_sync 00:03:57.261 EAL: No shared files mode enabled, IPC is disabled 00:03:57.261 EAL: Heap on socket 0 was shrunk by 130MB 00:03:57.261 EAL: Trying to obtain current memory policy. 00:03:57.261 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.521 EAL: Restoring previous memory policy: 4 00:03:57.521 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.521 EAL: request: mp_malloc_sync 00:03:57.521 EAL: No shared files mode enabled, IPC is disabled 00:03:57.521 EAL: Heap on socket 0 was expanded by 258MB 00:03:57.521 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.521 EAL: request: mp_malloc_sync 00:03:57.521 EAL: No shared files mode enabled, IPC is disabled 00:03:57.521 EAL: Heap on socket 0 was shrunk by 258MB 00:03:57.521 EAL: Trying to obtain current memory policy. 
00:03:57.521 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.521 EAL: Restoring previous memory policy: 4 00:03:57.521 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.521 EAL: request: mp_malloc_sync 00:03:57.521 EAL: No shared files mode enabled, IPC is disabled 00:03:57.521 EAL: Heap on socket 0 was expanded by 514MB 00:03:57.781 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.781 EAL: request: mp_malloc_sync 00:03:57.781 EAL: No shared files mode enabled, IPC is disabled 00:03:57.781 EAL: Heap on socket 0 was shrunk by 514MB 00:03:57.781 EAL: Trying to obtain current memory policy. 00:03:57.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.041 EAL: Restoring previous memory policy: 4 00:03:58.041 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.041 EAL: request: mp_malloc_sync 00:03:58.041 EAL: No shared files mode enabled, IPC is disabled 00:03:58.041 EAL: Heap on socket 0 was expanded by 1026MB 00:03:58.041 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.300 EAL: request: mp_malloc_sync 00:03:58.300 EAL: No shared files mode enabled, IPC is disabled 00:03:58.300 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:58.300 passed 00:03:58.300 00:03:58.300 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.300 suites 1 1 n/a 0 0 00:03:58.300 tests 2 2 2 0 0 00:03:58.300 asserts 497 497 497 0 n/a 00:03:58.300 00:03:58.300 Elapsed time = 0.975 seconds 00:03:58.301 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.301 EAL: request: mp_malloc_sync 00:03:58.301 EAL: No shared files mode enabled, IPC is disabled 00:03:58.301 EAL: Heap on socket 0 was shrunk by 2MB 00:03:58.301 EAL: No shared files mode enabled, IPC is disabled 00:03:58.301 EAL: No shared files mode enabled, IPC is disabled 00:03:58.301 EAL: No shared files mode enabled, IPC is disabled 00:03:58.301 00:03:58.301 real 0m1.101s 00:03:58.301 user 0m0.640s 00:03:58.301 sys 0m0.438s 00:03:58.301 12:56:01 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.301 12:56:01 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:58.301 ************************************ 00:03:58.301 END TEST env_vtophys 00:03:58.301 ************************************ 00:03:58.301 12:56:01 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:58.301 12:56:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.301 12:56:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.301 12:56:01 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.301 ************************************ 00:03:58.301 START TEST env_pci 00:03:58.301 ************************************ 00:03:58.301 12:56:01 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:58.301 00:03:58.301 00:03:58.301 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.301 http://cunit.sourceforge.net/ 00:03:58.301 00:03:58.301 00:03:58.301 Suite: pci 00:03:58.301 Test: pci_hook ...[2024-11-19 12:56:01.586645] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2645818 has claimed it 00:03:58.301 EAL: Cannot find device (10000:00:01.0) 00:03:58.301 EAL: Failed to attach device on primary process 00:03:58.301 passed 00:03:58.301 00:03:58.301 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:58.301 suites 1 1 n/a 0 0 00:03:58.301 tests 1 1 1 0 0 00:03:58.301 asserts 25 25 25 0 n/a 00:03:58.301 00:03:58.301 Elapsed time = 0.026 seconds 00:03:58.301 00:03:58.301 real 0m0.045s 00:03:58.301 user 0m0.012s 00:03:58.301 sys 0m0.032s 00:03:58.301 12:56:01 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.301 12:56:01 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:58.301 ************************************ 00:03:58.301 END TEST env_pci 00:03:58.301 ************************************ 00:03:58.301 12:56:01 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:58.301 12:56:01 env -- env/env.sh@15 -- # uname 00:03:58.301 12:56:01 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:58.301 12:56:01 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:58.301 12:56:01 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:58.301 12:56:01 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:58.301 12:56:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.301 12:56:01 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.561 ************************************ 00:03:58.561 START TEST env_dpdk_post_init 00:03:58.561 ************************************ 00:03:58.561 12:56:01 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:58.561 EAL: Detected CPU lcores: 96 00:03:58.561 EAL: Detected NUMA nodes: 2 00:03:58.561 EAL: Detected shared linkage of DPDK 00:03:58.561 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:58.561 EAL: Selected IOVA mode 'VA' 00:03:58.561 EAL: VFIO support initialized 00:03:58.561 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:58.561 EAL: Using IOMMU type 1 (Type 1) 00:03:58.561 EAL: Ignore mapping IO port bar(1) 00:03:58.561 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:58.561 EAL: Ignore mapping IO port bar(1) 00:03:58.561 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:58.561 EAL: Ignore mapping IO port bar(1) 00:03:58.561 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:58.561 EAL: Ignore mapping IO port bar(1) 00:03:58.561 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:58.561 EAL: Ignore mapping IO port bar(1) 00:03:58.561 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:58.561 EAL: Ignore mapping IO port bar(1) 00:03:58.561 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:58.561 EAL: Ignore mapping IO port bar(1) 00:03:58.561 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:58.561 EAL: Ignore mapping IO port bar(1) 00:03:58.561 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:59.499 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:59.499 EAL: Ignore mapping IO port bar(1) 00:03:59.499 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:59.499 EAL: Ignore mapping IO port bar(1) 00:03:59.499 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:59.499 EAL: Ignore mapping IO port bar(1) 00:03:59.499 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:59.499 EAL: Ignore mapping IO port bar(1) 00:03:59.499 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:59.499 EAL: Ignore mapping IO port bar(1) 00:03:59.499 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:59.499 EAL: Ignore mapping IO port bar(1) 00:03:59.499 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:59.499 EAL: Ignore mapping IO port bar(1) 00:03:59.499 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:59.499 EAL: Ignore mapping IO port bar(1) 00:03:59.499 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:02.788 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:02.788 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:02.788 Starting DPDK initialization... 00:04:02.788 Starting SPDK post initialization... 00:04:02.788 SPDK NVMe probe 00:04:02.788 Attaching to 0000:5e:00.0 00:04:02.788 Attached to 0000:5e:00.0 00:04:02.788 Cleaning up... 00:04:02.788 00:04:02.788 real 0m4.344s 00:04:02.788 user 0m2.960s 00:04:02.788 sys 0m0.453s 00:04:02.788 12:56:06 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.788 12:56:06 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:02.788 ************************************ 00:04:02.788 END TEST env_dpdk_post_init 00:04:02.788 ************************************ 00:04:02.788 12:56:06 env -- env/env.sh@26 -- # uname 00:04:02.788 12:56:06 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:02.788 12:56:06 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:02.788 12:56:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.788 12:56:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.788 12:56:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.788 ************************************ 00:04:02.788 START TEST env_mem_callbacks 00:04:02.788 ************************************ 00:04:02.788 12:56:06 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:02.788 EAL: Detected CPU lcores: 96 00:04:02.788 EAL: Detected NUMA nodes: 2 00:04:02.788 EAL: Detected shared linkage of DPDK 00:04:02.788 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:02.788 EAL: Selected IOVA mode 'VA' 00:04:02.788 EAL: VFIO support initialized 00:04:02.788 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:02.788 00:04:02.788 00:04:02.788 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.788 http://cunit.sourceforge.net/ 00:04:02.788 00:04:02.788 00:04:02.788 Suite: memory 00:04:02.788 Test: test ... 
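The register/unregister trace that follows is the point of this test: a malloc that outgrows the current heap triggers a DPDK allocation, and the memory-map callback registers the new region (the register 0x... lines) before the buffer is returned; frees that hand memory back produce the matching unregister lines. To reproduce the trace by hand, the same unit-test binary invoked above can be run directly from an SPDK build tree (root privileges and configured hugepages are assumed):

    # Same binary the harness ran; path relative to the SPDK checkout.
    sudo ./test/env/mem_callbacks/mem_callbacks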
00:04:02.788 register 0x200000200000 2097152
00:04:02.788 malloc 3145728
00:04:03.047 register 0x200000400000 4194304
00:04:03.047 buf 0x200000500000 len 3145728 PASSED
00:04:03.047 malloc 64
00:04:03.047 buf 0x2000004fff40 len 64 PASSED
00:04:03.047 malloc 4194304
00:04:03.047 register 0x200000800000 6291456
00:04:03.047 buf 0x200000a00000 len 4194304 PASSED
00:04:03.047 free 0x200000500000 3145728
00:04:03.047 free 0x2000004fff40 64
00:04:03.047 unregister 0x200000400000 4194304 PASSED
00:04:03.047 free 0x200000a00000 4194304
00:04:03.047 unregister 0x200000800000 6291456 PASSED
00:04:03.047 malloc 8388608
00:04:03.047 register 0x200000400000 10485760
00:04:03.047 buf 0x200000600000 len 8388608 PASSED
00:04:03.047 free 0x200000600000 8388608
00:04:03.047 unregister 0x200000400000 10485760 PASSED
00:04:03.047 passed
00:04:03.047 
00:04:03.047 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:03.047               suites      1      1    n/a      0        0
00:04:03.047                tests      1      1      1      0        0
00:04:03.047              asserts     15     15     15      0      n/a
00:04:03.047 
00:04:03.047 Elapsed time = 0.008 seconds
00:04:03.047 
00:04:03.047 real 0m0.060s
00:04:03.047 user 0m0.021s
00:04:03.047 sys 0m0.039s
12:56:06 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:03.047 12:56:06 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:03.047 ************************************
00:04:03.047 END TEST env_mem_callbacks
00:04:03.047 ************************************
00:04:03.047 
00:04:03.047 real 0m6.236s
00:04:03.047 user 0m4.004s
00:04:03.047 sys 0m1.312s
00:04:03.047 12:56:06 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:03.047 12:56:06 env -- common/autotest_common.sh@10 -- # set +x
00:04:03.047 ************************************
00:04:03.047 END TEST env
00:04:03.047 ************************************
00:04:03.047 12:56:06 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:03.047 12:56:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:03.047 12:56:06 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:03.047 12:56:06 -- common/autotest_common.sh@10 -- # set +x
00:04:03.047 ************************************
00:04:03.047 START TEST rpc
00:04:03.047 ************************************
00:04:03.047 12:56:06 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:03.047 * Looking for test storage...
00:04:03.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:03.047 12:56:06 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:03.047 12:56:06 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:03.047 12:56:06 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:03.306 12:56:06 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:03.306 12:56:06 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.306 12:56:06 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.306 12:56:06 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.306 12:56:06 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.306 12:56:06 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.306 12:56:06 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.306 12:56:06 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.306 12:56:06 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.306 12:56:06 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.306 12:56:06 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.306 12:56:06 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.306 12:56:06 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:03.306 12:56:06 rpc -- scripts/common.sh@345 -- # : 1 00:04:03.306 12:56:06 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.306 12:56:06 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:03.306 12:56:06 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:03.306 12:56:06 rpc -- scripts/common.sh@353 -- # local d=1 00:04:03.306 12:56:06 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.306 12:56:06 rpc -- scripts/common.sh@355 -- # echo 1 00:04:03.306 12:56:06 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.306 12:56:06 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:03.306 12:56:06 rpc -- scripts/common.sh@353 -- # local d=2 00:04:03.306 12:56:06 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.306 12:56:06 rpc -- scripts/common.sh@355 -- # echo 2 00:04:03.306 12:56:06 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.307 12:56:06 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.307 12:56:06 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.307 12:56:06 rpc -- scripts/common.sh@368 -- # return 0 00:04:03.307 12:56:06 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.307 12:56:06 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:03.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.307 --rc genhtml_branch_coverage=1 00:04:03.307 --rc genhtml_function_coverage=1 00:04:03.307 --rc genhtml_legend=1 00:04:03.307 --rc geninfo_all_blocks=1 00:04:03.307 --rc geninfo_unexecuted_blocks=1 00:04:03.307 00:04:03.307 ' 00:04:03.307 12:56:06 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:03.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.307 --rc genhtml_branch_coverage=1 00:04:03.307 --rc genhtml_function_coverage=1 00:04:03.307 --rc genhtml_legend=1 00:04:03.307 --rc geninfo_all_blocks=1 00:04:03.307 --rc geninfo_unexecuted_blocks=1 00:04:03.307 00:04:03.307 ' 00:04:03.307 12:56:06 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:03.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.307 --rc genhtml_branch_coverage=1 00:04:03.307 --rc genhtml_function_coverage=1 
00:04:03.307 --rc genhtml_legend=1 00:04:03.307 --rc geninfo_all_blocks=1 00:04:03.307 --rc geninfo_unexecuted_blocks=1 00:04:03.307 00:04:03.307 ' 00:04:03.307 12:56:06 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:03.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.307 --rc genhtml_branch_coverage=1 00:04:03.307 --rc genhtml_function_coverage=1 00:04:03.307 --rc genhtml_legend=1 00:04:03.307 --rc geninfo_all_blocks=1 00:04:03.307 --rc geninfo_unexecuted_blocks=1 00:04:03.307 00:04:03.307 ' 00:04:03.307 12:56:06 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2646687 00:04:03.307 12:56:06 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.307 12:56:06 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:03.307 12:56:06 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2646687 00:04:03.307 12:56:06 rpc -- common/autotest_common.sh@835 -- # '[' -z 2646687 ']' 00:04:03.307 12:56:06 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.307 12:56:06 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.307 12:56:06 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.307 12:56:06 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.307 12:56:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.307 [2024-11-19 12:56:06.507835] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:03.307 [2024-11-19 12:56:06.507879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2646687 ] 00:04:03.307 [2024-11-19 12:56:06.570610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.307 [2024-11-19 12:56:06.610301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:03.307 [2024-11-19 12:56:06.610341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2646687' to capture a snapshot of events at runtime. 00:04:03.307 [2024-11-19 12:56:06.610348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:03.307 [2024-11-19 12:56:06.610355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:03.307 [2024-11-19 12:56:06.610360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2646687 for offline analysis/debug. 
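The target was started with -e bdev, so the bdev tracepoint group referenced in the NOTICE lines above is live for the whole RPC suite. Two things follow directly from those notices: the trace buffer can be snapshotted while the target runs, and every rpc_cmd seen in the traces below is an ordinary JSON-RPC call that can be replayed by hand with scripts/rpc.py. A sketch, assuming a shell in the SPDK checkout (the pid 2646687 is specific to this run):

    # Snapshot the enabled tracepoints, exactly as the NOTICE suggests.
    build/bin/spdk_trace -s spdk_tgt -p 2646687

    # Replay the first steps of the rpc_integrity test manually:
    scripts/rpc.py bdev_malloc_create 8 512              # -> Malloc0: 16384 blocks of 512B
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_get_bdevs                        # lists both bdevs, as in the JSON below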
00:04:03.307 [2024-11-19 12:56:06.610883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.566 12:56:06 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.566 12:56:06 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:03.566 12:56:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:03.566 12:56:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:03.566 12:56:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:03.566 12:56:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:03.566 12:56:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.566 12:56:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.566 12:56:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.566 ************************************ 00:04:03.566 START TEST rpc_integrity 00:04:03.566 ************************************ 00:04:03.566 12:56:06 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:03.566 12:56:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:03.566 12:56:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.566 12:56:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.566 12:56:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.566 12:56:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:03.566 12:56:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:03.566 12:56:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:03.566 12:56:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:03.566 12:56:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.566 12:56:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.566 12:56:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.566 12:56:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:03.566 12:56:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:03.566 12:56:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.566 12:56:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.826 12:56:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.826 12:56:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:03.826 { 00:04:03.826 "name": "Malloc0", 00:04:03.826 "aliases": [ 00:04:03.826 "1e5db137-efc3-4fb1-a3a5-7149780a4740" 00:04:03.826 ], 00:04:03.826 "product_name": "Malloc disk", 00:04:03.826 "block_size": 512, 00:04:03.826 "num_blocks": 16384, 00:04:03.826 "uuid": "1e5db137-efc3-4fb1-a3a5-7149780a4740", 00:04:03.826 "assigned_rate_limits": { 00:04:03.826 "rw_ios_per_sec": 0, 00:04:03.826 "rw_mbytes_per_sec": 0, 00:04:03.826 "r_mbytes_per_sec": 0, 00:04:03.826 "w_mbytes_per_sec": 0 00:04:03.826 }, 
00:04:03.826 "claimed": false, 00:04:03.826 "zoned": false, 00:04:03.826 "supported_io_types": { 00:04:03.826 "read": true, 00:04:03.826 "write": true, 00:04:03.826 "unmap": true, 00:04:03.826 "flush": true, 00:04:03.826 "reset": true, 00:04:03.826 "nvme_admin": false, 00:04:03.826 "nvme_io": false, 00:04:03.826 "nvme_io_md": false, 00:04:03.826 "write_zeroes": true, 00:04:03.826 "zcopy": true, 00:04:03.826 "get_zone_info": false, 00:04:03.826 "zone_management": false, 00:04:03.826 "zone_append": false, 00:04:03.826 "compare": false, 00:04:03.826 "compare_and_write": false, 00:04:03.826 "abort": true, 00:04:03.826 "seek_hole": false, 00:04:03.826 "seek_data": false, 00:04:03.826 "copy": true, 00:04:03.826 "nvme_iov_md": false 00:04:03.826 }, 00:04:03.826 "memory_domains": [ 00:04:03.826 { 00:04:03.826 "dma_device_id": "system", 00:04:03.826 "dma_device_type": 1 00:04:03.826 }, 00:04:03.826 { 00:04:03.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.826 "dma_device_type": 2 00:04:03.826 } 00:04:03.826 ], 00:04:03.826 "driver_specific": {} 00:04:03.826 } 00:04:03.826 ]' 00:04:03.826 12:56:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:03.826 12:56:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:03.826 12:56:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:03.826 12:56:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.826 12:56:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.826 [2024-11-19 12:56:07.003943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:03.826 [2024-11-19 12:56:07.003982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:03.826 [2024-11-19 12:56:07.003995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x84e6e0 00:04:03.826 [2024-11-19 12:56:07.004002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:03.826 [2024-11-19 12:56:07.005123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:03.826 [2024-11-19 12:56:07.005145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:03.826 Passthru0 00:04:03.826 12:56:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.826 12:56:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:03.826 12:56:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.826 12:56:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.826 12:56:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.826 12:56:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:03.826 { 00:04:03.826 "name": "Malloc0", 00:04:03.826 "aliases": [ 00:04:03.826 "1e5db137-efc3-4fb1-a3a5-7149780a4740" 00:04:03.826 ], 00:04:03.826 "product_name": "Malloc disk", 00:04:03.826 "block_size": 512, 00:04:03.826 "num_blocks": 16384, 00:04:03.826 "uuid": "1e5db137-efc3-4fb1-a3a5-7149780a4740", 00:04:03.826 "assigned_rate_limits": { 00:04:03.826 "rw_ios_per_sec": 0, 00:04:03.826 "rw_mbytes_per_sec": 0, 00:04:03.826 "r_mbytes_per_sec": 0, 00:04:03.826 "w_mbytes_per_sec": 0 00:04:03.826 }, 00:04:03.826 "claimed": true, 00:04:03.826 "claim_type": "exclusive_write", 00:04:03.826 "zoned": false, 00:04:03.826 "supported_io_types": { 00:04:03.826 "read": true, 00:04:03.826 "write": true, 00:04:03.826 "unmap": true, 00:04:03.826 "flush": 
true, 00:04:03.826 "reset": true, 00:04:03.826 "nvme_admin": false, 00:04:03.826 "nvme_io": false, 00:04:03.826 "nvme_io_md": false, 00:04:03.826 "write_zeroes": true, 00:04:03.826 "zcopy": true, 00:04:03.826 "get_zone_info": false, 00:04:03.826 "zone_management": false, 00:04:03.826 "zone_append": false, 00:04:03.826 "compare": false, 00:04:03.826 "compare_and_write": false, 00:04:03.826 "abort": true, 00:04:03.826 "seek_hole": false, 00:04:03.826 "seek_data": false, 00:04:03.826 "copy": true, 00:04:03.826 "nvme_iov_md": false 00:04:03.826 }, 00:04:03.826 "memory_domains": [ 00:04:03.826 { 00:04:03.826 "dma_device_id": "system", 00:04:03.826 "dma_device_type": 1 00:04:03.826 }, 00:04:03.826 { 00:04:03.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.826 "dma_device_type": 2 00:04:03.826 } 00:04:03.826 ], 00:04:03.826 "driver_specific": {} 00:04:03.826 }, 00:04:03.826 { 00:04:03.826 "name": "Passthru0", 00:04:03.826 "aliases": [ 00:04:03.826 "5e59e7f6-8ca4-53b6-9cef-baa5481f8285" 00:04:03.826 ], 00:04:03.826 "product_name": "passthru", 00:04:03.826 "block_size": 512, 00:04:03.826 "num_blocks": 16384, 00:04:03.826 "uuid": "5e59e7f6-8ca4-53b6-9cef-baa5481f8285", 00:04:03.826 "assigned_rate_limits": { 00:04:03.826 "rw_ios_per_sec": 0, 00:04:03.826 "rw_mbytes_per_sec": 0, 00:04:03.826 "r_mbytes_per_sec": 0, 00:04:03.826 "w_mbytes_per_sec": 0 00:04:03.826 }, 00:04:03.826 "claimed": false, 00:04:03.826 "zoned": false, 00:04:03.826 "supported_io_types": { 00:04:03.826 "read": true, 00:04:03.826 "write": true, 00:04:03.826 "unmap": true, 00:04:03.826 "flush": true, 00:04:03.826 "reset": true, 00:04:03.826 "nvme_admin": false, 00:04:03.826 "nvme_io": false, 00:04:03.826 "nvme_io_md": false, 00:04:03.827 "write_zeroes": true, 00:04:03.827 "zcopy": true, 00:04:03.827 "get_zone_info": false, 00:04:03.827 "zone_management": false, 00:04:03.827 "zone_append": false, 00:04:03.827 "compare": false, 00:04:03.827 "compare_and_write": false, 00:04:03.827 "abort": true, 00:04:03.827 "seek_hole": false, 00:04:03.827 "seek_data": false, 00:04:03.827 "copy": true, 00:04:03.827 "nvme_iov_md": false 00:04:03.827 }, 00:04:03.827 "memory_domains": [ 00:04:03.827 { 00:04:03.827 "dma_device_id": "system", 00:04:03.827 "dma_device_type": 1 00:04:03.827 }, 00:04:03.827 { 00:04:03.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.827 "dma_device_type": 2 00:04:03.827 } 00:04:03.827 ], 00:04:03.827 "driver_specific": { 00:04:03.827 "passthru": { 00:04:03.827 "name": "Passthru0", 00:04:03.827 "base_bdev_name": "Malloc0" 00:04:03.827 } 00:04:03.827 } 00:04:03.827 } 00:04:03.827 ]' 00:04:03.827 12:56:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:03.827 12:56:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:03.827 12:56:07 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:03.827 12:56:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.827 12:56:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.827 12:56:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.827 12:56:07 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:03.827 12:56:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.827 12:56:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.827 12:56:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.827 12:56:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:03.827 12:56:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.827 12:56:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.827 12:56:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.827 12:56:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:03.827 12:56:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:03.827 12:56:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:03.827 00:04:03.827 real 0m0.279s 00:04:03.827 user 0m0.170s 00:04:03.827 sys 0m0.045s 00:04:03.827 12:56:07 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.827 12:56:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.827 ************************************ 00:04:03.827 END TEST rpc_integrity 00:04:03.827 ************************************ 00:04:03.827 12:56:07 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:03.827 12:56:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.827 12:56:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.827 12:56:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.086 ************************************ 00:04:04.086 START TEST rpc_plugins 00:04:04.086 ************************************ 00:04:04.086 12:56:07 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:04.086 12:56:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:04.086 12:56:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.086 12:56:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.086 12:56:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.086 12:56:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:04.086 12:56:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:04.086 12:56:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.086 12:56:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.086 12:56:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.086 12:56:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:04.086 { 00:04:04.086 "name": "Malloc1", 00:04:04.086 "aliases": [ 00:04:04.086 "7c92ce34-521e-45af-99a6-053382fed4a0" 00:04:04.086 ], 00:04:04.086 "product_name": "Malloc disk", 00:04:04.086 "block_size": 4096, 00:04:04.086 "num_blocks": 256, 00:04:04.086 "uuid": "7c92ce34-521e-45af-99a6-053382fed4a0", 00:04:04.086 "assigned_rate_limits": { 00:04:04.086 "rw_ios_per_sec": 0, 00:04:04.086 "rw_mbytes_per_sec": 0, 00:04:04.086 "r_mbytes_per_sec": 0, 00:04:04.086 "w_mbytes_per_sec": 0 00:04:04.086 }, 00:04:04.086 "claimed": false, 00:04:04.086 "zoned": false, 00:04:04.086 "supported_io_types": { 00:04:04.086 "read": true, 00:04:04.086 "write": true, 00:04:04.086 "unmap": true, 00:04:04.086 "flush": true, 00:04:04.086 "reset": true, 00:04:04.086 "nvme_admin": false, 00:04:04.086 "nvme_io": false, 00:04:04.086 "nvme_io_md": false, 00:04:04.086 "write_zeroes": true, 00:04:04.086 "zcopy": true, 00:04:04.086 "get_zone_info": false, 00:04:04.086 "zone_management": false, 00:04:04.086 "zone_append": false, 00:04:04.086 "compare": false, 00:04:04.086 "compare_and_write": false, 00:04:04.086 "abort": true, 00:04:04.086 "seek_hole": false, 00:04:04.086 "seek_data": false, 00:04:04.086 "copy": true, 00:04:04.086 "nvme_iov_md": false 
00:04:04.086 }, 00:04:04.086 "memory_domains": [ 00:04:04.086 { 00:04:04.086 "dma_device_id": "system", 00:04:04.086 "dma_device_type": 1 00:04:04.086 }, 00:04:04.086 { 00:04:04.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.087 "dma_device_type": 2 00:04:04.087 } 00:04:04.087 ], 00:04:04.087 "driver_specific": {} 00:04:04.087 } 00:04:04.087 ]' 00:04:04.087 12:56:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:04.087 12:56:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:04.087 12:56:07 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:04.087 12:56:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.087 12:56:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.087 12:56:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.087 12:56:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:04.087 12:56:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.087 12:56:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.087 12:56:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.087 12:56:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:04.087 12:56:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:04.087 12:56:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:04.087 00:04:04.087 real 0m0.140s 00:04:04.087 user 0m0.087s 00:04:04.087 sys 0m0.017s 00:04:04.087 12:56:07 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.087 12:56:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.087 ************************************ 00:04:04.087 END TEST rpc_plugins 00:04:04.087 ************************************ 00:04:04.087 12:56:07 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:04.087 12:56:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.087 12:56:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.087 12:56:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.087 ************************************ 00:04:04.087 START TEST rpc_trace_cmd_test 00:04:04.087 ************************************ 00:04:04.087 12:56:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:04.087 12:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:04.087 12:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:04.087 12:56:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.087 12:56:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:04.087 12:56:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.087 12:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:04.087 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2646687", 00:04:04.087 "tpoint_group_mask": "0x8", 00:04:04.087 "iscsi_conn": { 00:04:04.087 "mask": "0x2", 00:04:04.087 "tpoint_mask": "0x0" 00:04:04.087 }, 00:04:04.087 "scsi": { 00:04:04.087 "mask": "0x4", 00:04:04.087 "tpoint_mask": "0x0" 00:04:04.087 }, 00:04:04.087 "bdev": { 00:04:04.087 "mask": "0x8", 00:04:04.087 "tpoint_mask": "0xffffffffffffffff" 00:04:04.087 }, 00:04:04.087 "nvmf_rdma": { 00:04:04.087 "mask": "0x10", 00:04:04.087 "tpoint_mask": "0x0" 00:04:04.087 }, 00:04:04.087 "nvmf_tcp": { 00:04:04.087 "mask": "0x20", 00:04:04.087 
"tpoint_mask": "0x0" 00:04:04.087 }, 00:04:04.087 "ftl": { 00:04:04.087 "mask": "0x40", 00:04:04.087 "tpoint_mask": "0x0" 00:04:04.087 }, 00:04:04.087 "blobfs": { 00:04:04.087 "mask": "0x80", 00:04:04.087 "tpoint_mask": "0x0" 00:04:04.087 }, 00:04:04.087 "dsa": { 00:04:04.087 "mask": "0x200", 00:04:04.087 "tpoint_mask": "0x0" 00:04:04.087 }, 00:04:04.087 "thread": { 00:04:04.087 "mask": "0x400", 00:04:04.087 "tpoint_mask": "0x0" 00:04:04.087 }, 00:04:04.087 "nvme_pcie": { 00:04:04.087 "mask": "0x800", 00:04:04.087 "tpoint_mask": "0x0" 00:04:04.087 }, 00:04:04.087 "iaa": { 00:04:04.087 "mask": "0x1000", 00:04:04.087 "tpoint_mask": "0x0" 00:04:04.087 }, 00:04:04.087 "nvme_tcp": { 00:04:04.087 "mask": "0x2000", 00:04:04.087 "tpoint_mask": "0x0" 00:04:04.087 }, 00:04:04.087 "bdev_nvme": { 00:04:04.087 "mask": "0x4000", 00:04:04.087 "tpoint_mask": "0x0" 00:04:04.087 }, 00:04:04.087 "sock": { 00:04:04.087 "mask": "0x8000", 00:04:04.087 "tpoint_mask": "0x0" 00:04:04.087 }, 00:04:04.087 "blob": { 00:04:04.087 "mask": "0x10000", 00:04:04.087 "tpoint_mask": "0x0" 00:04:04.087 }, 00:04:04.087 "bdev_raid": { 00:04:04.087 "mask": "0x20000", 00:04:04.087 "tpoint_mask": "0x0" 00:04:04.087 }, 00:04:04.087 "scheduler": { 00:04:04.087 "mask": "0x40000", 00:04:04.087 "tpoint_mask": "0x0" 00:04:04.087 } 00:04:04.087 }' 00:04:04.087 12:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:04.346 12:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:04.346 12:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:04.346 12:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:04.346 12:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:04.346 12:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:04.346 12:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:04.346 12:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:04.346 12:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:04.346 12:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:04.346 00:04:04.346 real 0m0.217s 00:04:04.346 user 0m0.181s 00:04:04.346 sys 0m0.027s 00:04:04.346 12:56:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.346 12:56:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:04.346 ************************************ 00:04:04.346 END TEST rpc_trace_cmd_test 00:04:04.346 ************************************ 00:04:04.346 12:56:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:04.346 12:56:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:04.346 12:56:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:04.346 12:56:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.346 12:56:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.346 12:56:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.346 ************************************ 00:04:04.346 START TEST rpc_daemon_integrity 00:04:04.346 ************************************ 00:04:04.346 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:04.346 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:04.347 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.347 12:56:07 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.347 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:04.606 { 00:04:04.606 "name": "Malloc2", 00:04:04.606 "aliases": [ 00:04:04.606 "18075e50-2858-4fc0-a3d8-4d3b05f3a7a0" 00:04:04.606 ], 00:04:04.606 "product_name": "Malloc disk", 00:04:04.606 "block_size": 512, 00:04:04.606 "num_blocks": 16384, 00:04:04.606 "uuid": "18075e50-2858-4fc0-a3d8-4d3b05f3a7a0", 00:04:04.606 "assigned_rate_limits": { 00:04:04.606 "rw_ios_per_sec": 0, 00:04:04.606 "rw_mbytes_per_sec": 0, 00:04:04.606 "r_mbytes_per_sec": 0, 00:04:04.606 "w_mbytes_per_sec": 0 00:04:04.606 }, 00:04:04.606 "claimed": false, 00:04:04.606 "zoned": false, 00:04:04.606 "supported_io_types": { 00:04:04.606 "read": true, 00:04:04.606 "write": true, 00:04:04.606 "unmap": true, 00:04:04.606 "flush": true, 00:04:04.606 "reset": true, 00:04:04.606 "nvme_admin": false, 00:04:04.606 "nvme_io": false, 00:04:04.606 "nvme_io_md": false, 00:04:04.606 "write_zeroes": true, 00:04:04.606 "zcopy": true, 00:04:04.606 "get_zone_info": false, 00:04:04.606 "zone_management": false, 00:04:04.606 "zone_append": false, 00:04:04.606 "compare": false, 00:04:04.606 "compare_and_write": false, 00:04:04.606 "abort": true, 00:04:04.606 "seek_hole": false, 00:04:04.606 "seek_data": false, 00:04:04.606 "copy": true, 00:04:04.606 "nvme_iov_md": false 00:04:04.606 }, 00:04:04.606 "memory_domains": [ 00:04:04.606 { 00:04:04.606 "dma_device_id": "system", 00:04:04.606 "dma_device_type": 1 00:04:04.606 }, 00:04:04.606 { 00:04:04.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.606 "dma_device_type": 2 00:04:04.606 } 00:04:04.606 ], 00:04:04.606 "driver_specific": {} 00:04:04.606 } 00:04:04.606 ]' 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.606 [2024-11-19 12:56:07.846258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:04.606 
[2024-11-19 12:56:07.846286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:04.606 [2024-11-19 12:56:07.846298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8deb70 00:04:04.606 [2024-11-19 12:56:07.846304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:04.606 [2024-11-19 12:56:07.847289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:04.606 [2024-11-19 12:56:07.847310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:04.606 Passthru0 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.606 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:04.606 { 00:04:04.606 "name": "Malloc2", 00:04:04.606 "aliases": [ 00:04:04.606 "18075e50-2858-4fc0-a3d8-4d3b05f3a7a0" 00:04:04.606 ], 00:04:04.606 "product_name": "Malloc disk", 00:04:04.606 "block_size": 512, 00:04:04.606 "num_blocks": 16384, 00:04:04.606 "uuid": "18075e50-2858-4fc0-a3d8-4d3b05f3a7a0", 00:04:04.606 "assigned_rate_limits": { 00:04:04.606 "rw_ios_per_sec": 0, 00:04:04.606 "rw_mbytes_per_sec": 0, 00:04:04.606 "r_mbytes_per_sec": 0, 00:04:04.606 "w_mbytes_per_sec": 0 00:04:04.606 }, 00:04:04.606 "claimed": true, 00:04:04.606 "claim_type": "exclusive_write", 00:04:04.606 "zoned": false, 00:04:04.606 "supported_io_types": { 00:04:04.606 "read": true, 00:04:04.606 "write": true, 00:04:04.606 "unmap": true, 00:04:04.606 "flush": true, 00:04:04.606 "reset": true, 00:04:04.606 "nvme_admin": false, 00:04:04.606 "nvme_io": false, 00:04:04.606 "nvme_io_md": false, 00:04:04.606 "write_zeroes": true, 00:04:04.606 "zcopy": true, 00:04:04.606 "get_zone_info": false, 00:04:04.606 "zone_management": false, 00:04:04.606 "zone_append": false, 00:04:04.606 "compare": false, 00:04:04.606 "compare_and_write": false, 00:04:04.606 "abort": true, 00:04:04.606 "seek_hole": false, 00:04:04.606 "seek_data": false, 00:04:04.606 "copy": true, 00:04:04.606 "nvme_iov_md": false 00:04:04.606 }, 00:04:04.606 "memory_domains": [ 00:04:04.606 { 00:04:04.606 "dma_device_id": "system", 00:04:04.606 "dma_device_type": 1 00:04:04.606 }, 00:04:04.606 { 00:04:04.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.606 "dma_device_type": 2 00:04:04.606 } 00:04:04.606 ], 00:04:04.606 "driver_specific": {} 00:04:04.606 }, 00:04:04.606 { 00:04:04.606 "name": "Passthru0", 00:04:04.606 "aliases": [ 00:04:04.606 "d65b2a95-0401-5aa8-bdf8-60500da3952f" 00:04:04.606 ], 00:04:04.606 "product_name": "passthru", 00:04:04.606 "block_size": 512, 00:04:04.606 "num_blocks": 16384, 00:04:04.606 "uuid": "d65b2a95-0401-5aa8-bdf8-60500da3952f", 00:04:04.606 "assigned_rate_limits": { 00:04:04.606 "rw_ios_per_sec": 0, 00:04:04.606 "rw_mbytes_per_sec": 0, 00:04:04.606 "r_mbytes_per_sec": 0, 00:04:04.606 "w_mbytes_per_sec": 0 00:04:04.606 }, 00:04:04.606 "claimed": false, 00:04:04.606 "zoned": false, 00:04:04.606 "supported_io_types": { 00:04:04.606 "read": true, 00:04:04.606 "write": true, 00:04:04.606 "unmap": true, 00:04:04.606 "flush": true, 00:04:04.606 "reset": true, 
00:04:04.606 "nvme_admin": false, 00:04:04.606 "nvme_io": false, 00:04:04.606 "nvme_io_md": false, 00:04:04.606 "write_zeroes": true, 00:04:04.606 "zcopy": true, 00:04:04.606 "get_zone_info": false, 00:04:04.606 "zone_management": false, 00:04:04.606 "zone_append": false, 00:04:04.606 "compare": false, 00:04:04.606 "compare_and_write": false, 00:04:04.606 "abort": true, 00:04:04.606 "seek_hole": false, 00:04:04.606 "seek_data": false, 00:04:04.606 "copy": true, 00:04:04.607 "nvme_iov_md": false 00:04:04.607 }, 00:04:04.607 "memory_domains": [ 00:04:04.607 { 00:04:04.607 "dma_device_id": "system", 00:04:04.607 "dma_device_type": 1 00:04:04.607 }, 00:04:04.607 { 00:04:04.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.607 "dma_device_type": 2 00:04:04.607 } 00:04:04.607 ], 00:04:04.607 "driver_specific": { 00:04:04.607 "passthru": { 00:04:04.607 "name": "Passthru0", 00:04:04.607 "base_bdev_name": "Malloc2" 00:04:04.607 } 00:04:04.607 } 00:04:04.607 } 00:04:04.607 ]' 00:04:04.607 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:04.607 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:04.607 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:04.607 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.607 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.607 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.607 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:04.607 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.607 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.607 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.607 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:04.607 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.607 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.607 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.607 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:04.607 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:04.866 12:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:04.866 00:04:04.866 real 0m0.279s 00:04:04.866 user 0m0.173s 00:04:04.866 sys 0m0.044s 00:04:04.866 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.866 12:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.866 ************************************ 00:04:04.866 END TEST rpc_daemon_integrity 00:04:04.866 ************************************ 00:04:04.866 12:56:08 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:04.866 12:56:08 rpc -- rpc/rpc.sh@84 -- # killprocess 2646687 00:04:04.866 12:56:08 rpc -- common/autotest_common.sh@954 -- # '[' -z 2646687 ']' 00:04:04.866 12:56:08 rpc -- common/autotest_common.sh@958 -- # kill -0 2646687 00:04:04.866 12:56:08 rpc -- common/autotest_common.sh@959 -- # uname 00:04:04.866 12:56:08 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:04.866 12:56:08 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2646687 
00:04:04.866 12:56:08 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:04.866 12:56:08 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:04.866 12:56:08 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2646687' 00:04:04.866 killing process with pid 2646687 00:04:04.866 12:56:08 rpc -- common/autotest_common.sh@973 -- # kill 2646687 00:04:04.866 12:56:08 rpc -- common/autotest_common.sh@978 -- # wait 2646687 00:04:05.126 00:04:05.126 real 0m2.095s 00:04:05.126 user 0m2.670s 00:04:05.126 sys 0m0.703s 00:04:05.126 12:56:08 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.126 12:56:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.126 ************************************ 00:04:05.126 END TEST rpc 00:04:05.126 ************************************ 00:04:05.126 12:56:08 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:05.126 12:56:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.126 12:56:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.126 12:56:08 -- common/autotest_common.sh@10 -- # set +x 00:04:05.126 ************************************ 00:04:05.126 START TEST skip_rpc 00:04:05.126 ************************************ 00:04:05.126 12:56:08 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:05.385 * Looking for test storage... 00:04:05.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:05.385 12:56:08 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:05.385 12:56:08 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:05.385 12:56:08 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:05.385 12:56:08 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.385 12:56:08 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:05.385 12:56:08 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.385 12:56:08 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:05.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.385 --rc genhtml_branch_coverage=1 00:04:05.385 --rc genhtml_function_coverage=1 00:04:05.385 --rc genhtml_legend=1 00:04:05.385 --rc geninfo_all_blocks=1 00:04:05.385 --rc geninfo_unexecuted_blocks=1 00:04:05.385 00:04:05.385 ' 00:04:05.385 12:56:08 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:05.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.385 --rc genhtml_branch_coverage=1 00:04:05.385 --rc genhtml_function_coverage=1 00:04:05.385 --rc genhtml_legend=1 00:04:05.385 --rc geninfo_all_blocks=1 00:04:05.385 --rc geninfo_unexecuted_blocks=1 00:04:05.385 00:04:05.385 ' 00:04:05.385 12:56:08 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:05.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.385 --rc genhtml_branch_coverage=1 00:04:05.385 --rc genhtml_function_coverage=1 00:04:05.385 --rc genhtml_legend=1 00:04:05.385 --rc geninfo_all_blocks=1 00:04:05.385 --rc geninfo_unexecuted_blocks=1 00:04:05.385 00:04:05.385 ' 00:04:05.385 12:56:08 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:05.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.385 --rc genhtml_branch_coverage=1 00:04:05.386 --rc genhtml_function_coverage=1 00:04:05.386 --rc genhtml_legend=1 00:04:05.386 --rc geninfo_all_blocks=1 00:04:05.386 --rc geninfo_unexecuted_blocks=1 00:04:05.386 00:04:05.386 ' 00:04:05.386 12:56:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:05.386 12:56:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:05.386 12:56:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:05.386 12:56:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.386 12:56:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.386 12:56:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.386 ************************************ 00:04:05.386 START TEST skip_rpc 00:04:05.386 ************************************ 00:04:05.386 12:56:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:05.386 
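The first skip_rpc case that follows starts spdk_tgt with --no-rpc-server and then asserts that an RPC must fail, which is why the NOT rpc_cmd spdk_get_version block below ends with es=1. Reduced to its essentials outside the harness (paths assume a local SPDK checkout; the 5-second wait mirrors the script's sleep 5):

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5
    # With no RPC server listening, any call must fail:
    if scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC succeeded without an RPC server" >&2
    fi
    kill "$pid"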
12:56:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2647327 00:04:05.386 12:56:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.386 12:56:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:05.386 12:56:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:05.386 [2024-11-19 12:56:08.708591] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:05.386 [2024-11-19 12:56:08.708632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2647327 ] 00:04:05.645 [2024-11-19 12:56:08.786440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.645 [2024-11-19 12:56:08.828540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2647327 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2647327 ']' 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2647327 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2647327 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2647327' 00:04:10.919 killing process with pid 2647327 00:04:10.919 12:56:13 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2647327 00:04:10.919 12:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2647327 00:04:10.919 00:04:10.919 real 0m5.372s 00:04:10.919 user 0m5.118s 00:04:10.919 sys 0m0.296s 00:04:10.919 12:56:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.919 12:56:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.919 ************************************ 00:04:10.919 END TEST skip_rpc 00:04:10.919 ************************************ 00:04:10.919 12:56:14 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:10.919 12:56:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.919 12:56:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.919 12:56:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.919 ************************************ 00:04:10.919 START TEST skip_rpc_with_json 00:04:10.919 ************************************ 00:04:10.919 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:10.919 12:56:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:10.919 12:56:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2648271 00:04:10.919 12:56:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.919 12:56:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:10.919 12:56:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2648271 00:04:10.919 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2648271 ']' 00:04:10.919 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.919 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.919 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.919 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.919 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:10.919 [2024-11-19 12:56:14.158594] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:10.919 [2024-11-19 12:56:14.158639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2648271 ] 00:04:10.919 [2024-11-19 12:56:14.236020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.919 [2024-11-19 12:56:14.278261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.179 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.179 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:11.180 12:56:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:11.180 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.180 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.180 [2024-11-19 12:56:14.496166] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:11.180 request: 00:04:11.180 { 00:04:11.180 "trtype": "tcp", 00:04:11.180 "method": "nvmf_get_transports", 00:04:11.180 "req_id": 1 00:04:11.180 } 00:04:11.180 Got JSON-RPC error response 00:04:11.180 response: 00:04:11.180 { 00:04:11.180 "code": -19, 00:04:11.180 "message": "No such device" 00:04:11.180 } 00:04:11.180 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:11.180 12:56:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:11.180 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.180 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.180 [2024-11-19 12:56:14.508277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:11.180 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.180 12:56:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:11.180 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.180 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.441 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.441 12:56:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:11.441 { 00:04:11.441 "subsystems": [ 00:04:11.441 { 00:04:11.441 "subsystem": "fsdev", 00:04:11.441 "config": [ 00:04:11.441 { 00:04:11.441 "method": "fsdev_set_opts", 00:04:11.441 "params": { 00:04:11.441 "fsdev_io_pool_size": 65535, 00:04:11.441 "fsdev_io_cache_size": 256 00:04:11.441 } 00:04:11.441 } 00:04:11.441 ] 00:04:11.441 }, 00:04:11.441 { 00:04:11.441 "subsystem": "vfio_user_target", 00:04:11.441 "config": null 00:04:11.441 }, 00:04:11.441 { 00:04:11.441 "subsystem": "keyring", 00:04:11.441 "config": [] 00:04:11.441 }, 00:04:11.441 { 00:04:11.441 "subsystem": "iobuf", 00:04:11.441 "config": [ 00:04:11.441 { 00:04:11.441 "method": "iobuf_set_options", 00:04:11.441 "params": { 00:04:11.441 "small_pool_count": 8192, 00:04:11.441 "large_pool_count": 1024, 00:04:11.441 "small_bufsize": 8192, 00:04:11.441 "large_bufsize": 135168, 00:04:11.441 "enable_numa": false 00:04:11.441 } 00:04:11.441 } 
00:04:11.441 ] 00:04:11.441 }, 00:04:11.441 { 00:04:11.441 "subsystem": "sock", 00:04:11.441 "config": [ 00:04:11.441 { 00:04:11.441 "method": "sock_set_default_impl", 00:04:11.441 "params": { 00:04:11.441 "impl_name": "posix" 00:04:11.441 } 00:04:11.441 }, 00:04:11.441 { 00:04:11.441 "method": "sock_impl_set_options", 00:04:11.441 "params": { 00:04:11.441 "impl_name": "ssl", 00:04:11.441 "recv_buf_size": 4096, 00:04:11.441 "send_buf_size": 4096, 00:04:11.441 "enable_recv_pipe": true, 00:04:11.441 "enable_quickack": false, 00:04:11.441 "enable_placement_id": 0, 00:04:11.441 "enable_zerocopy_send_server": true, 00:04:11.441 "enable_zerocopy_send_client": false, 00:04:11.441 "zerocopy_threshold": 0, 00:04:11.441 "tls_version": 0, 00:04:11.441 "enable_ktls": false 00:04:11.441 } 00:04:11.441 }, 00:04:11.441 { 00:04:11.441 "method": "sock_impl_set_options", 00:04:11.441 "params": { 00:04:11.441 "impl_name": "posix", 00:04:11.441 "recv_buf_size": 2097152, 00:04:11.441 "send_buf_size": 2097152, 00:04:11.441 "enable_recv_pipe": true, 00:04:11.441 "enable_quickack": false, 00:04:11.441 "enable_placement_id": 0, 00:04:11.441 "enable_zerocopy_send_server": true, 00:04:11.441 "enable_zerocopy_send_client": false, 00:04:11.441 "zerocopy_threshold": 0, 00:04:11.441 "tls_version": 0, 00:04:11.441 "enable_ktls": false 00:04:11.441 } 00:04:11.441 } 00:04:11.441 ] 00:04:11.441 }, 00:04:11.441 { 00:04:11.441 "subsystem": "vmd", 00:04:11.441 "config": [] 00:04:11.441 }, 00:04:11.441 { 00:04:11.441 "subsystem": "accel", 00:04:11.441 "config": [ 00:04:11.441 { 00:04:11.441 "method": "accel_set_options", 00:04:11.441 "params": { 00:04:11.441 "small_cache_size": 128, 00:04:11.441 "large_cache_size": 16, 00:04:11.441 "task_count": 2048, 00:04:11.441 "sequence_count": 2048, 00:04:11.441 "buf_count": 2048 00:04:11.441 } 00:04:11.441 } 00:04:11.441 ] 00:04:11.441 }, 00:04:11.441 { 00:04:11.441 "subsystem": "bdev", 00:04:11.441 "config": [ 00:04:11.441 { 00:04:11.441 "method": "bdev_set_options", 00:04:11.441 "params": { 00:04:11.441 "bdev_io_pool_size": 65535, 00:04:11.441 "bdev_io_cache_size": 256, 00:04:11.441 "bdev_auto_examine": true, 00:04:11.441 "iobuf_small_cache_size": 128, 00:04:11.441 "iobuf_large_cache_size": 16 00:04:11.441 } 00:04:11.441 }, 00:04:11.441 { 00:04:11.441 "method": "bdev_raid_set_options", 00:04:11.441 "params": { 00:04:11.442 "process_window_size_kb": 1024, 00:04:11.442 "process_max_bandwidth_mb_sec": 0 00:04:11.442 } 00:04:11.442 }, 00:04:11.442 { 00:04:11.442 "method": "bdev_iscsi_set_options", 00:04:11.442 "params": { 00:04:11.442 "timeout_sec": 30 00:04:11.442 } 00:04:11.442 }, 00:04:11.442 { 00:04:11.442 "method": "bdev_nvme_set_options", 00:04:11.442 "params": { 00:04:11.442 "action_on_timeout": "none", 00:04:11.442 "timeout_us": 0, 00:04:11.442 "timeout_admin_us": 0, 00:04:11.442 "keep_alive_timeout_ms": 10000, 00:04:11.442 "arbitration_burst": 0, 00:04:11.442 "low_priority_weight": 0, 00:04:11.442 "medium_priority_weight": 0, 00:04:11.442 "high_priority_weight": 0, 00:04:11.442 "nvme_adminq_poll_period_us": 10000, 00:04:11.442 "nvme_ioq_poll_period_us": 0, 00:04:11.442 "io_queue_requests": 0, 00:04:11.442 "delay_cmd_submit": true, 00:04:11.442 "transport_retry_count": 4, 00:04:11.442 "bdev_retry_count": 3, 00:04:11.442 "transport_ack_timeout": 0, 00:04:11.442 "ctrlr_loss_timeout_sec": 0, 00:04:11.442 "reconnect_delay_sec": 0, 00:04:11.442 "fast_io_fail_timeout_sec": 0, 00:04:11.442 "disable_auto_failback": false, 00:04:11.442 "generate_uuids": false, 00:04:11.442 "transport_tos": 
0, 00:04:11.442 "nvme_error_stat": false, 00:04:11.442 "rdma_srq_size": 0, 00:04:11.442 "io_path_stat": false, 00:04:11.442 "allow_accel_sequence": false, 00:04:11.442 "rdma_max_cq_size": 0, 00:04:11.442 "rdma_cm_event_timeout_ms": 0, 00:04:11.442 "dhchap_digests": [ 00:04:11.442 "sha256", 00:04:11.442 "sha384", 00:04:11.442 "sha512" 00:04:11.442 ], 00:04:11.442 "dhchap_dhgroups": [ 00:04:11.442 "null", 00:04:11.442 "ffdhe2048", 00:04:11.442 "ffdhe3072", 00:04:11.442 "ffdhe4096", 00:04:11.442 "ffdhe6144", 00:04:11.442 "ffdhe8192" 00:04:11.442 ] 00:04:11.442 } 00:04:11.442 }, 00:04:11.442 { 00:04:11.442 "method": "bdev_nvme_set_hotplug", 00:04:11.442 "params": { 00:04:11.442 "period_us": 100000, 00:04:11.442 "enable": false 00:04:11.442 } 00:04:11.442 }, 00:04:11.442 { 00:04:11.442 "method": "bdev_wait_for_examine" 00:04:11.442 } 00:04:11.442 ] 00:04:11.442 }, 00:04:11.442 { 00:04:11.442 "subsystem": "scsi", 00:04:11.442 "config": null 00:04:11.442 }, 00:04:11.442 { 00:04:11.442 "subsystem": "scheduler", 00:04:11.442 "config": [ 00:04:11.442 { 00:04:11.442 "method": "framework_set_scheduler", 00:04:11.442 "params": { 00:04:11.442 "name": "static" 00:04:11.442 } 00:04:11.442 } 00:04:11.442 ] 00:04:11.442 }, 00:04:11.442 { 00:04:11.442 "subsystem": "vhost_scsi", 00:04:11.442 "config": [] 00:04:11.442 }, 00:04:11.442 { 00:04:11.442 "subsystem": "vhost_blk", 00:04:11.442 "config": [] 00:04:11.442 }, 00:04:11.442 { 00:04:11.442 "subsystem": "ublk", 00:04:11.442 "config": [] 00:04:11.442 }, 00:04:11.442 { 00:04:11.442 "subsystem": "nbd", 00:04:11.442 "config": [] 00:04:11.442 }, 00:04:11.442 { 00:04:11.442 "subsystem": "nvmf", 00:04:11.442 "config": [ 00:04:11.442 { 00:04:11.442 "method": "nvmf_set_config", 00:04:11.442 "params": { 00:04:11.442 "discovery_filter": "match_any", 00:04:11.442 "admin_cmd_passthru": { 00:04:11.442 "identify_ctrlr": false 00:04:11.442 }, 00:04:11.442 "dhchap_digests": [ 00:04:11.442 "sha256", 00:04:11.442 "sha384", 00:04:11.442 "sha512" 00:04:11.442 ], 00:04:11.442 "dhchap_dhgroups": [ 00:04:11.442 "null", 00:04:11.442 "ffdhe2048", 00:04:11.442 "ffdhe3072", 00:04:11.442 "ffdhe4096", 00:04:11.442 "ffdhe6144", 00:04:11.442 "ffdhe8192" 00:04:11.442 ] 00:04:11.442 } 00:04:11.442 }, 00:04:11.442 { 00:04:11.442 "method": "nvmf_set_max_subsystems", 00:04:11.442 "params": { 00:04:11.442 "max_subsystems": 1024 00:04:11.442 } 00:04:11.442 }, 00:04:11.442 { 00:04:11.442 "method": "nvmf_set_crdt", 00:04:11.442 "params": { 00:04:11.442 "crdt1": 0, 00:04:11.442 "crdt2": 0, 00:04:11.442 "crdt3": 0 00:04:11.442 } 00:04:11.442 }, 00:04:11.442 { 00:04:11.442 "method": "nvmf_create_transport", 00:04:11.442 "params": { 00:04:11.442 "trtype": "TCP", 00:04:11.442 "max_queue_depth": 128, 00:04:11.442 "max_io_qpairs_per_ctrlr": 127, 00:04:11.442 "in_capsule_data_size": 4096, 00:04:11.442 "max_io_size": 131072, 00:04:11.442 "io_unit_size": 131072, 00:04:11.442 "max_aq_depth": 128, 00:04:11.442 "num_shared_buffers": 511, 00:04:11.442 "buf_cache_size": 4294967295, 00:04:11.442 "dif_insert_or_strip": false, 00:04:11.442 "zcopy": false, 00:04:11.442 "c2h_success": true, 00:04:11.442 "sock_priority": 0, 00:04:11.442 "abort_timeout_sec": 1, 00:04:11.442 "ack_timeout": 0, 00:04:11.442 "data_wr_pool_size": 0 00:04:11.442 } 00:04:11.442 } 00:04:11.442 ] 00:04:11.442 }, 00:04:11.442 { 00:04:11.442 "subsystem": "iscsi", 00:04:11.442 "config": [ 00:04:11.442 { 00:04:11.442 "method": "iscsi_set_options", 00:04:11.442 "params": { 00:04:11.442 "node_base": "iqn.2016-06.io.spdk", 00:04:11.442 "max_sessions": 
128, 00:04:11.442 "max_connections_per_session": 2, 00:04:11.442 "max_queue_depth": 64, 00:04:11.442 "default_time2wait": 2, 00:04:11.442 "default_time2retain": 20, 00:04:11.442 "first_burst_length": 8192, 00:04:11.442 "immediate_data": true, 00:04:11.442 "allow_duplicated_isid": false, 00:04:11.442 "error_recovery_level": 0, 00:04:11.442 "nop_timeout": 60, 00:04:11.442 "nop_in_interval": 30, 00:04:11.442 "disable_chap": false, 00:04:11.442 "require_chap": false, 00:04:11.442 "mutual_chap": false, 00:04:11.442 "chap_group": 0, 00:04:11.442 "max_large_datain_per_connection": 64, 00:04:11.442 "max_r2t_per_connection": 4, 00:04:11.442 "pdu_pool_size": 36864, 00:04:11.442 "immediate_data_pool_size": 16384, 00:04:11.442 "data_out_pool_size": 2048 00:04:11.442 } 00:04:11.442 } 00:04:11.442 ] 00:04:11.442 } 00:04:11.442 ] 00:04:11.442 } 00:04:11.442 12:56:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:11.442 12:56:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2648271 00:04:11.442 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2648271 ']' 00:04:11.442 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2648271 00:04:11.442 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:11.442 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.442 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2648271 00:04:11.442 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:11.442 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:11.442 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2648271' 00:04:11.442 killing process with pid 2648271 00:04:11.442 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2648271 00:04:11.442 12:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2648271 00:04:11.702 12:56:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2648356 00:04:11.702 12:56:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:11.702 12:56:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:16.977 12:56:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2648356 00:04:16.977 12:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2648356 ']' 00:04:16.977 12:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2648356 00:04:16.977 12:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:16.977 12:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.977 12:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2648356 00:04:16.977 12:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.977 12:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.977 12:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2648356' 00:04:16.977 killing process with pid 2648356 00:04:16.977 12:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2648356 00:04:16.977 12:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2648356 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:17.238 00:04:17.238 real 0m6.303s 00:04:17.238 user 0m5.977s 00:04:17.238 sys 0m0.630s 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:17.238 ************************************ 00:04:17.238 END TEST skip_rpc_with_json 00:04:17.238 ************************************ 00:04:17.238 12:56:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:17.238 12:56:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.238 12:56:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.238 12:56:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.238 ************************************ 00:04:17.238 START TEST skip_rpc_with_delay 00:04:17.238 ************************************ 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.238 
[2024-11-19 12:56:20.539103] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:17.238 00:04:17.238 real 0m0.072s 00:04:17.238 user 0m0.043s 00:04:17.238 sys 0m0.028s 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.238 12:56:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:17.238 ************************************ 00:04:17.238 END TEST skip_rpc_with_delay 00:04:17.238 ************************************ 00:04:17.238 12:56:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:17.238 12:56:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:17.238 12:56:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:17.238 12:56:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.238 12:56:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.238 12:56:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.498 ************************************ 00:04:17.498 START TEST exit_on_failed_rpc_init 00:04:17.498 ************************************ 00:04:17.498 12:56:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:17.498 12:56:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2649380 00:04:17.498 12:56:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:17.498 12:56:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2649380 00:04:17.498 12:56:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2649380 ']' 00:04:17.498 12:56:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.498 12:56:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.498 12:56:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.498 12:56:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.498 12:56:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.498 [2024-11-19 12:56:20.679265] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
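For readers following the trace: skip_rpc_with_delay is a negative test, driven by the suite's NOT helper, which succeeds only when the wrapped command fails. A condensed sketch of that pattern, reconstructed from this trace alone (the real helper in autotest_common.sh also validates the argument and normalizes exit codes):

    NOT() {
        local es=0
        "$@" || es=$?   # run the wrapped command and capture its exit status
        (( es != 0 ))   # succeed only if the command failed
    }
    # As exercised above: spdk_tgt must refuse --wait-for-rpc when started
    # with --no-rpc-server.
    NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc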
00:04:17.498 [2024-11-19 12:56:20.679311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2649380 ] 00:04:17.498 [2024-11-19 12:56:20.756040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.498 [2024-11-19 12:56:20.799437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.757 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.757 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:17.757 12:56:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.757 12:56:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:17.757 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:17.757 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:17.757 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.757 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.757 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.757 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.757 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.757 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.757 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.757 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:17.757 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:17.757 [2024-11-19 12:56:21.075363] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:17.757 [2024-11-19 12:56:21.075410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2649493 ] 00:04:18.017 [2024-11-19 12:56:21.150800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.017 [2024-11-19 12:56:21.191491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.017 [2024-11-19 12:56:21.191544] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
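The error above is the whole point of exit_on_failed_rpc_init: the second spdk_tgt instance (-m 0x2) deliberately reuses the first instance's default RPC socket, /var/tmp/spdk.sock, so the RPC listener fails and the app exits non-zero. Outside a negative test, a second target would be given its own socket, as this log itself does elsewhere with -r; a sketch, with the second socket path purely illustrative:

    # Give the second instance a private RPC socket, then address it there.
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
    ./scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version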
00:04:18.017 [2024-11-19 12:56:21.191554] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:18.017 [2024-11-19 12:56:21.191563] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2649380 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2649380 ']' 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2649380 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2649380 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2649380' 00:04:18.017 killing process with pid 2649380 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2649380 00:04:18.017 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2649380 00:04:18.276 00:04:18.276 real 0m0.943s 00:04:18.276 user 0m0.985s 00:04:18.276 sys 0m0.397s 00:04:18.276 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.276 12:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:18.276 ************************************ 00:04:18.276 END TEST exit_on_failed_rpc_init 00:04:18.277 ************************************ 00:04:18.277 12:56:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:18.277 00:04:18.277 real 0m13.162s 00:04:18.277 user 0m12.326s 00:04:18.277 sys 0m1.650s 00:04:18.277 12:56:21 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.277 12:56:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.277 ************************************ 00:04:18.277 END TEST skip_rpc 00:04:18.277 ************************************ 00:04:18.277 12:56:21 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:18.277 12:56:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.277 12:56:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.277 12:56:21 -- 
common/autotest_common.sh@10 -- # set +x 00:04:18.536 ************************************ 00:04:18.536 START TEST rpc_client 00:04:18.536 ************************************ 00:04:18.536 12:56:21 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:18.536 * Looking for test storage... 00:04:18.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:18.536 12:56:21 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:18.536 12:56:21 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:18.536 12:56:21 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:18.536 12:56:21 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.536 12:56:21 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:18.536 12:56:21 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.536 12:56:21 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:18.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.536 --rc genhtml_branch_coverage=1 00:04:18.536 --rc genhtml_function_coverage=1 00:04:18.536 --rc genhtml_legend=1 00:04:18.536 --rc geninfo_all_blocks=1 00:04:18.536 --rc geninfo_unexecuted_blocks=1 00:04:18.536 00:04:18.536 ' 00:04:18.536 12:56:21 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:18.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.537 --rc genhtml_branch_coverage=1 00:04:18.537 --rc genhtml_function_coverage=1 00:04:18.537 --rc genhtml_legend=1 00:04:18.537 --rc geninfo_all_blocks=1 00:04:18.537 --rc geninfo_unexecuted_blocks=1 00:04:18.537 00:04:18.537 ' 00:04:18.537 12:56:21 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:18.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.537 --rc genhtml_branch_coverage=1 00:04:18.537 --rc genhtml_function_coverage=1 00:04:18.537 --rc genhtml_legend=1 00:04:18.537 --rc geninfo_all_blocks=1 00:04:18.537 --rc geninfo_unexecuted_blocks=1 00:04:18.537 00:04:18.537 ' 00:04:18.537 12:56:21 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:18.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.537 --rc genhtml_branch_coverage=1 00:04:18.537 --rc genhtml_function_coverage=1 00:04:18.537 --rc genhtml_legend=1 00:04:18.537 --rc geninfo_all_blocks=1 00:04:18.537 --rc geninfo_unexecuted_blocks=1 00:04:18.537 00:04:18.537 ' 00:04:18.537 12:56:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:18.537 OK 00:04:18.537 12:56:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:18.537 00:04:18.537 real 0m0.198s 00:04:18.537 user 0m0.121s 00:04:18.537 sys 0m0.092s 00:04:18.537 12:56:21 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.537 12:56:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:18.537 ************************************ 00:04:18.537 END TEST rpc_client 00:04:18.537 ************************************ 00:04:18.537 12:56:21 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
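The lcov block above (it repeats for json_config just below) is scripts/common.sh gating coverage options on the installed lcov version: lt 1.15 2 splits both version strings on dots and compares them field by field. A minimal sketch of that comparison, reconstructed from the trace rather than copied from common.sh:

    lt() {  # true if version $1 sorts before version $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is older than 2.x: use the legacy --rc lcov_* options"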
00:04:18.537 12:56:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.797 12:56:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.797 12:56:21 -- common/autotest_common.sh@10 -- # set +x 00:04:18.797 ************************************ 00:04:18.797 START TEST json_config 00:04:18.797 ************************************ 00:04:18.797 12:56:21 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:18.797 12:56:22 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:18.797 12:56:22 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:18.797 12:56:22 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:18.797 12:56:22 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:18.797 12:56:22 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.797 12:56:22 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.797 12:56:22 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.797 12:56:22 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.797 12:56:22 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.797 12:56:22 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.797 12:56:22 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.797 12:56:22 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.797 12:56:22 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.797 12:56:22 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.797 12:56:22 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.797 12:56:22 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:18.797 12:56:22 json_config -- scripts/common.sh@345 -- # : 1 00:04:18.797 12:56:22 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.797 12:56:22 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:18.797 12:56:22 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:18.797 12:56:22 json_config -- scripts/common.sh@353 -- # local d=1 00:04:18.797 12:56:22 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.797 12:56:22 json_config -- scripts/common.sh@355 -- # echo 1 00:04:18.797 12:56:22 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.797 12:56:22 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:18.797 12:56:22 json_config -- scripts/common.sh@353 -- # local d=2 00:04:18.797 12:56:22 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.797 12:56:22 json_config -- scripts/common.sh@355 -- # echo 2 00:04:18.797 12:56:22 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.797 12:56:22 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.797 12:56:22 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.797 12:56:22 json_config -- scripts/common.sh@368 -- # return 0 00:04:18.797 12:56:22 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.797 12:56:22 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:18.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.797 --rc genhtml_branch_coverage=1 00:04:18.797 --rc genhtml_function_coverage=1 00:04:18.797 --rc genhtml_legend=1 00:04:18.797 --rc geninfo_all_blocks=1 00:04:18.797 --rc geninfo_unexecuted_blocks=1 00:04:18.797 00:04:18.797 ' 00:04:18.797 12:56:22 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:18.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.797 --rc genhtml_branch_coverage=1 00:04:18.797 --rc genhtml_function_coverage=1 00:04:18.797 --rc genhtml_legend=1 00:04:18.797 --rc geninfo_all_blocks=1 00:04:18.797 --rc geninfo_unexecuted_blocks=1 00:04:18.797 00:04:18.797 ' 00:04:18.797 12:56:22 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:18.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.797 --rc genhtml_branch_coverage=1 00:04:18.797 --rc genhtml_function_coverage=1 00:04:18.797 --rc genhtml_legend=1 00:04:18.797 --rc geninfo_all_blocks=1 00:04:18.797 --rc geninfo_unexecuted_blocks=1 00:04:18.797 00:04:18.797 ' 00:04:18.797 12:56:22 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:18.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.798 --rc genhtml_branch_coverage=1 00:04:18.798 --rc genhtml_function_coverage=1 00:04:18.798 --rc genhtml_legend=1 00:04:18.798 --rc geninfo_all_blocks=1 00:04:18.798 --rc geninfo_unexecuted_blocks=1 00:04:18.798 00:04:18.798 ' 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:18.798 12:56:22 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:18.798 12:56:22 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:18.798 12:56:22 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:18.798 12:56:22 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:18.798 12:56:22 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:18.798 12:56:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.798 12:56:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.798 12:56:22 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.798 12:56:22 json_config -- paths/export.sh@5 -- # export PATH 00:04:18.798 12:56:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@51 -- # : 0 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:18.798 12:56:22 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:18.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:18.798 12:56:22 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:18.798 INFO: JSON configuration test init 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:18.798 12:56:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.798 12:56:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:18.798 12:56:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.798 12:56:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.798 12:56:22 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:18.798 12:56:22 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:18.798 12:56:22 json_config -- json_config/common.sh@10 -- # shift 00:04:18.798 12:56:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:18.798 12:56:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:18.798 12:56:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:18.798 12:56:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.798 12:56:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.798 12:56:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2649830 00:04:18.798 12:56:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:18.798 Waiting for target to run... 00:04:18.798 12:56:22 json_config -- json_config/common.sh@25 -- # waitforlisten 2649830 /var/tmp/spdk_tgt.sock 00:04:18.798 12:56:22 json_config -- common/autotest_common.sh@835 -- # '[' -z 2649830 ']' 00:04:18.798 12:56:22 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:18.798 12:56:22 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:18.798 12:56:22 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.798 12:56:22 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:18.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:18.798 12:56:22 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.798 12:56:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.058 [2024-11-19 12:56:22.199887] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
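Taken together, the json_config bootstrap traced here and in the lines that follow amounts to: start the target paused with --wait-for-rpc, wait for its RPC socket to come up, then push a generated configuration over that socket. A sketch of the flow (piping gen_nvme.sh into load_config is inferred from the two consecutive trace lines below, not quoted from json_config.sh):

    # Start the target with RPC on a private socket; framework init is
    # deferred until a config arrives.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # Once /var/tmp/spdk_tgt.sock accepts connections:
    ./scripts/gen_nvme.sh --json-with-subsystems | \
        ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config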
00:04:19.058 [2024-11-19 12:56:22.199939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2649830 ] 00:04:19.317 [2024-11-19 12:56:22.492392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.317 [2024-11-19 12:56:22.526343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.885 12:56:23 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.885 12:56:23 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:19.885 12:56:23 json_config -- json_config/common.sh@26 -- # echo '' 00:04:19.885 00:04:19.885 12:56:23 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:19.885 12:56:23 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:19.885 12:56:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.885 12:56:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.885 12:56:23 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:19.885 12:56:23 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:19.885 12:56:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:19.885 12:56:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.885 12:56:23 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:19.885 12:56:23 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:19.885 12:56:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:23.174 12:56:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.174 12:56:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:23.174 12:56:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:23.174 12:56:26 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@54 -- # sort 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:23.174 12:56:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:23.174 12:56:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:23.174 12:56:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.174 12:56:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:23.174 12:56:26 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:23.174 12:56:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:23.432 MallocForNvmf0 00:04:23.432 12:56:26 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:23.432 12:56:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:23.433 MallocForNvmf1 00:04:23.691 12:56:26 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:23.691 12:56:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:23.691 [2024-11-19 12:56:26.991285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:23.691 12:56:27 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:23.691 12:56:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:23.950 12:56:27 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:23.950 12:56:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:24.209 12:56:27 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:24.209 12:56:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:24.467 12:56:27 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:24.467 12:56:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:24.467 [2024-11-19 12:56:27.801807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:24.467 12:56:27 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:24.467 12:56:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:24.467 12:56:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.726 12:56:27 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:24.726 12:56:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:24.726 12:56:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.726 12:56:27 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:24.726 12:56:27 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:24.726 12:56:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:24.726 MallocBdevForConfigChangeCheck 00:04:24.726 12:56:28 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:24.726 12:56:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:24.726 12:56:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.985 12:56:28 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:24.985 12:56:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:25.244 12:56:28 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:25.244 INFO: shutting down applications... 
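The NVMe-oF target assembled above reduces to a short standalone RPC sequence. A minimal sketch, assuming a running spdk_tgt with its RPC socket at /var/tmp/spdk_tgt.sock and an SPDK checkout at $SPDK_ROOT (both placeholders); the bdev sizes, NQN, serial and port are the ones in the trace:

    # rpc.py wrapper; the harness's tgt_rpc helper does the same thing
    RPC="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MiB bdev, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB bdev, 1 KiB blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0          # -u io-unit-size, -c in-capsule-data-size
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420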
00:04:25.244 12:56:28 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:25.244 12:56:28 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:25.244 12:56:28 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:25.244 12:56:28 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:27.147 Calling clear_iscsi_subsystem 00:04:27.147 Calling clear_nvmf_subsystem 00:04:27.147 Calling clear_nbd_subsystem 00:04:27.147 Calling clear_ublk_subsystem 00:04:27.147 Calling clear_vhost_blk_subsystem 00:04:27.147 Calling clear_vhost_scsi_subsystem 00:04:27.147 Calling clear_bdev_subsystem 00:04:27.147 12:56:30 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:27.147 12:56:30 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:27.147 12:56:30 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:27.147 12:56:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:27.147 12:56:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:27.147 12:56:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:27.147 12:56:30 json_config -- json_config/json_config.sh@352 -- # break 00:04:27.147 12:56:30 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:27.147 12:56:30 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:27.147 12:56:30 json_config -- json_config/common.sh@31 -- # local app=target 00:04:27.147 12:56:30 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:27.147 12:56:30 json_config -- json_config/common.sh@35 -- # [[ -n 2649830 ]] 00:04:27.147 12:56:30 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2649830 00:04:27.147 12:56:30 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:27.147 12:56:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:27.147 12:56:30 json_config -- json_config/common.sh@41 -- # kill -0 2649830 00:04:27.147 12:56:30 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:27.715 12:56:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:27.715 12:56:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:27.715 12:56:30 json_config -- json_config/common.sh@41 -- # kill -0 2649830 00:04:27.715 12:56:30 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:27.715 12:56:30 json_config -- json_config/common.sh@43 -- # break 00:04:27.715 12:56:30 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:27.715 12:56:30 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:27.715 SPDK target shutdown done 00:04:27.715 12:56:30 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:27.715 INFO: relaunching applications... 
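The relaunch that follows does not reissue those RPCs; it boots spdk_tgt with --json pointing at the configuration dumped by save_config. A sketch under the same placeholder assumptions ($SPDK_ROOT, $CONFIG), with the startup wait reduced to an RPC ping:

    "$SPDK_ROOT/build/bin/spdk_tgt" -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --json "$CONFIG" &
    pid=$!
    # simplified waitforlisten: poll the RPC socket until the app answers
    until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$pid" || { echo 'target died during startup' >&2; break; }
        sleep 0.1
    done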
00:04:27.715 12:56:30 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.715 12:56:30 json_config -- json_config/common.sh@9 -- # local app=target 00:04:27.715 12:56:30 json_config -- json_config/common.sh@10 -- # shift 00:04:27.715 12:56:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:27.715 12:56:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:27.715 12:56:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:27.715 12:56:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.715 12:56:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.715 12:56:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2651361 00:04:27.715 12:56:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:27.715 Waiting for target to run... 00:04:27.715 12:56:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.715 12:56:30 json_config -- json_config/common.sh@25 -- # waitforlisten 2651361 /var/tmp/spdk_tgt.sock 00:04:27.715 12:56:30 json_config -- common/autotest_common.sh@835 -- # '[' -z 2651361 ']' 00:04:27.715 12:56:30 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.715 12:56:30 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.715 12:56:30 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:27.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:27.715 12:56:30 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.715 12:56:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.715 [2024-11-19 12:56:30.990244] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:27.715 [2024-11-19 12:56:30.990303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2651361 ] 00:04:28.283 [2024-11-19 12:56:31.450777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.283 [2024-11-19 12:56:31.506454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.574 [2024-11-19 12:56:34.541012] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:31.574 [2024-11-19 12:56:34.573369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.834 12:56:35 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.834 12:56:35 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:31.834 12:56:35 json_config -- json_config/common.sh@26 -- # echo '' 00:04:31.834 00:04:31.834 12:56:35 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:31.834 12:56:35 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:31.834 INFO: Checking if target configuration is the same... 
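The sameness check that follows fetches the live configuration over RPC, normalizes both JSON documents with config_filter.py -method sort, and diffs the results. A sketch with placeholder paths (the harness itself pipes through mktemp files, as the trace below shows):

    "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
    sort_json() { "$SPDK_ROOT/test/json_config/config_filter.py" -method sort < "$1"; }
    if diff -u <(sort_json /tmp/live.json) <(sort_json "$SPDK_ROOT/spdk_tgt_config.json") >/dev/null; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi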
00:04:31.834 12:56:35 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:31.834 12:56:35 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.834 12:56:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.834 + '[' 2 -ne 2 ']' 00:04:32.094 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:32.094 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:32.094 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:32.094 +++ basename /dev/fd/62 00:04:32.094 ++ mktemp /tmp/62.XXX 00:04:32.094 + tmp_file_1=/tmp/62.axh 00:04:32.094 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:32.094 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:32.094 + tmp_file_2=/tmp/spdk_tgt_config.json.i8I 00:04:32.094 + ret=0 00:04:32.094 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:32.355 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:32.355 + diff -u /tmp/62.axh /tmp/spdk_tgt_config.json.i8I 00:04:32.355 + echo 'INFO: JSON config files are the same' 00:04:32.355 INFO: JSON config files are the same 00:04:32.355 + rm /tmp/62.axh /tmp/spdk_tgt_config.json.i8I 00:04:32.355 + exit 0 00:04:32.355 12:56:35 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:32.355 12:56:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:32.355 INFO: changing configuration and checking if this can be detected... 00:04:32.355 12:56:35 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:32.355 12:56:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:32.615 12:56:35 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:32.615 12:56:35 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:32.615 12:56:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.615 + '[' 2 -ne 2 ']' 00:04:32.615 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:32.615 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:32.615 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:32.615 +++ basename /dev/fd/62 00:04:32.615 ++ mktemp /tmp/62.XXX 00:04:32.615 + tmp_file_1=/tmp/62.Sxh 00:04:32.615 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:32.615 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:32.615 + tmp_file_2=/tmp/spdk_tgt_config.json.zLG 00:04:32.615 + ret=0 00:04:32.615 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:32.874 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:32.874 + diff -u /tmp/62.Sxh /tmp/spdk_tgt_config.json.zLG 00:04:32.874 + ret=1 00:04:32.874 + echo '=== Start of file: /tmp/62.Sxh ===' 00:04:32.874 + cat /tmp/62.Sxh 00:04:32.874 + echo '=== End of file: /tmp/62.Sxh ===' 00:04:32.874 + echo '' 00:04:32.874 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zLG ===' 00:04:32.874 + cat /tmp/spdk_tgt_config.json.zLG 00:04:32.874 + echo '=== End of file: /tmp/spdk_tgt_config.json.zLG ===' 00:04:32.874 + echo '' 00:04:32.874 + rm /tmp/62.Sxh /tmp/spdk_tgt_config.json.zLG 00:04:32.874 + exit 1 00:04:32.874 12:56:36 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:32.874 INFO: configuration change detected. 00:04:32.874 12:56:36 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:32.874 12:56:36 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:32.874 12:56:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.874 12:56:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.874 12:56:36 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:32.874 12:56:36 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:32.874 12:56:36 json_config -- json_config/json_config.sh@324 -- # [[ -n 2651361 ]] 00:04:32.874 12:56:36 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:32.874 12:56:36 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:32.874 12:56:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.874 12:56:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.874 12:56:36 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:32.874 12:56:36 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:32.874 12:56:36 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:32.874 12:56:36 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:33.134 12:56:36 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:33.134 12:56:36 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:33.134 12:56:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.134 12:56:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.134 12:56:36 json_config -- json_config/json_config.sh@330 -- # killprocess 2651361 00:04:33.134 12:56:36 json_config -- common/autotest_common.sh@954 -- # '[' -z 2651361 ']' 00:04:33.134 12:56:36 json_config -- common/autotest_common.sh@958 -- # kill -0 2651361 00:04:33.134 12:56:36 json_config -- common/autotest_common.sh@959 -- # uname 00:04:33.134 12:56:36 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.134 12:56:36 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2651361 00:04:33.134 12:56:36 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.134 12:56:36 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.134 12:56:36 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2651361' 00:04:33.134 killing process with pid 2651361 00:04:33.134 12:56:36 json_config -- common/autotest_common.sh@973 -- # kill 2651361 00:04:33.134 12:56:36 json_config -- common/autotest_common.sh@978 -- # wait 2651361 00:04:34.515 12:56:37 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.515 12:56:37 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:34.515 12:56:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:34.515 12:56:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.515 12:56:37 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:34.515 12:56:37 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:34.515 INFO: Success 00:04:34.515 00:04:34.515 real 0m15.896s 00:04:34.515 user 0m16.572s 00:04:34.515 sys 0m2.567s 00:04:34.515 12:56:37 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.516 12:56:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.516 ************************************ 00:04:34.516 END TEST json_config 00:04:34.516 ************************************ 00:04:34.516 12:56:37 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:34.516 12:56:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.516 12:56:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.516 12:56:37 -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 ************************************ 00:04:34.776 START TEST json_config_extra_key 00:04:34.776 ************************************ 00:04:34.776 12:56:37 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:34.776 12:56:37 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:34.776 12:56:37 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:34.776 12:56:37 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:34.776 12:56:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.776 12:56:38 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:34.776 12:56:38 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.776 12:56:38 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:34.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.776 --rc genhtml_branch_coverage=1 00:04:34.776 --rc genhtml_function_coverage=1 00:04:34.776 --rc genhtml_legend=1 00:04:34.776 --rc geninfo_all_blocks=1 00:04:34.776 --rc geninfo_unexecuted_blocks=1 00:04:34.776 00:04:34.776 ' 00:04:34.776 12:56:38 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:34.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.776 --rc genhtml_branch_coverage=1 00:04:34.776 --rc genhtml_function_coverage=1 00:04:34.776 --rc genhtml_legend=1 00:04:34.776 --rc geninfo_all_blocks=1 00:04:34.776 --rc geninfo_unexecuted_blocks=1 00:04:34.776 00:04:34.776 ' 00:04:34.776 12:56:38 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:34.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.776 --rc genhtml_branch_coverage=1 00:04:34.776 --rc genhtml_function_coverage=1 00:04:34.776 --rc genhtml_legend=1 00:04:34.776 --rc geninfo_all_blocks=1 00:04:34.776 --rc geninfo_unexecuted_blocks=1 00:04:34.776 00:04:34.776 ' 00:04:34.776 12:56:38 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:34.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.776 --rc genhtml_branch_coverage=1 00:04:34.776 --rc genhtml_function_coverage=1 00:04:34.776 --rc genhtml_legend=1 00:04:34.776 --rc geninfo_all_blocks=1 00:04:34.776 --rc geninfo_unexecuted_blocks=1 00:04:34.776 00:04:34.776 ' 00:04:34.776 12:56:38 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:34.776 12:56:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:34.776 12:56:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:34.776 12:56:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.776 12:56:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.776 12:56:38 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.776 12:56:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:34.777 12:56:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.777 12:56:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:34.777 12:56:38 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:34.777 12:56:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:34.777 12:56:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:34.777 12:56:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:34.777 12:56:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:34.777 12:56:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:34.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:34.777 12:56:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:34.777 12:56:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:34.777 12:56:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:34.777 12:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:34.777 12:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:34.777 12:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:34.777 12:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:34.777 12:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:34.777 12:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:34.777 12:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:34.777 12:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:34.777 12:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:34.777 12:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:34.777 12:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:34.777 INFO: launching applications... 
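The "[: : integer expression expected" complaint a few lines up comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': bash's test builtin requires integers on both sides of -eq, and the variable expanded to an empty string. A two-line reproduction plus a guarded form (neither is the harness's actual code):

    v=''
    [ "$v" -eq 1 ]                   # bash: [: : integer expression expected
    [ -n "$v" ] && [ "$v" -eq 1 ]    # guard first: no error, the test is simply false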
00:04:34.777 12:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:34.777 12:56:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:34.777 12:56:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:34.777 12:56:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:34.777 12:56:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:34.777 12:56:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:34.777 12:56:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.777 12:56:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.777 12:56:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2652644 00:04:34.777 12:56:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:34.777 Waiting for target to run... 00:04:34.777 12:56:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2652644 /var/tmp/spdk_tgt.sock 00:04:34.777 12:56:38 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2652644 ']' 00:04:34.777 12:56:38 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:34.777 12:56:38 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:34.777 12:56:38 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.777 12:56:38 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:34.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:34.777 12:56:38 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.777 12:56:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:34.777 [2024-11-19 12:56:38.149658] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:34.777 [2024-11-19 12:56:38.149710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2652644 ] 00:04:35.347 [2024-11-19 12:56:38.605723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.347 [2024-11-19 12:56:38.658692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.607 12:56:38 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.607 12:56:38 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:35.607 12:56:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:35.607 00:04:35.607 12:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:35.607 INFO: shutting down applications... 
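The shutdown that follows repeats the pattern used for the json_config target earlier: send SIGINT, then poll with kill -0 (signal 0 checks process existence only) up to 30 times at 0.5 s intervals. A sketch with $pid as a placeholder:

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # process gone: clean shutdown
        sleep 0.5
    done
    kill -0 "$pid" 2>/dev/null || echo 'SPDK target shutdown done'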
00:04:35.607 12:56:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:35.607 12:56:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:35.607 12:56:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:35.607 12:56:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2652644 ]] 00:04:35.607 12:56:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2652644 00:04:35.607 12:56:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:35.867 12:56:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.867 12:56:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2652644 00:04:35.867 12:56:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:36.127 12:56:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:36.127 12:56:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.127 12:56:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2652644 00:04:36.127 12:56:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:36.127 12:56:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:36.127 12:56:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:36.127 12:56:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:36.127 SPDK target shutdown done 00:04:36.127 12:56:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:36.127 Success 00:04:36.127 00:04:36.127 real 0m1.578s 00:04:36.127 user 0m1.203s 00:04:36.127 sys 0m0.568s 00:04:36.127 12:56:39 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.127 12:56:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:36.127 ************************************ 00:04:36.127 END TEST json_config_extra_key 00:04:36.127 ************************************ 00:04:36.387 12:56:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:36.387 12:56:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.387 12:56:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.387 12:56:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.387 ************************************ 00:04:36.387 START TEST alias_rpc 00:04:36.387 ************************************ 00:04:36.387 12:56:39 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:36.387 * Looking for test storage... 
00:04:36.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:36.387 12:56:39 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:36.387 12:56:39 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:36.387 12:56:39 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:36.387 12:56:39 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.387 12:56:39 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:36.387 12:56:39 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.387 12:56:39 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:36.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.387 --rc genhtml_branch_coverage=1 00:04:36.387 --rc genhtml_function_coverage=1 00:04:36.387 --rc genhtml_legend=1 00:04:36.387 --rc geninfo_all_blocks=1 00:04:36.387 --rc geninfo_unexecuted_blocks=1 00:04:36.387 00:04:36.387 ' 00:04:36.387 12:56:39 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:36.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.387 --rc genhtml_branch_coverage=1 00:04:36.387 --rc genhtml_function_coverage=1 00:04:36.387 --rc genhtml_legend=1 00:04:36.387 --rc geninfo_all_blocks=1 00:04:36.387 --rc geninfo_unexecuted_blocks=1 00:04:36.387 00:04:36.387 ' 00:04:36.387 12:56:39 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:36.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.387 --rc genhtml_branch_coverage=1 00:04:36.387 --rc genhtml_function_coverage=1 00:04:36.387 --rc genhtml_legend=1 00:04:36.387 --rc geninfo_all_blocks=1 00:04:36.387 --rc geninfo_unexecuted_blocks=1 00:04:36.387 00:04:36.387 ' 00:04:36.387 12:56:39 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:36.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.387 --rc genhtml_branch_coverage=1 00:04:36.387 --rc genhtml_function_coverage=1 00:04:36.387 --rc genhtml_legend=1 00:04:36.387 --rc geninfo_all_blocks=1 00:04:36.387 --rc geninfo_unexecuted_blocks=1 00:04:36.387 00:04:36.387 ' 00:04:36.387 12:56:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:36.387 12:56:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2652962 00:04:36.387 12:56:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2652962 00:04:36.387 12:56:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.387 12:56:39 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2652962 ']' 00:04:36.387 12:56:39 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.387 12:56:39 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.387 12:56:39 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.387 12:56:39 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.387 12:56:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.726 [2024-11-19 12:56:39.792682] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:36.726 [2024-11-19 12:56:39.792747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2652962 ] 00:04:36.726 [2024-11-19 12:56:39.867854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.726 [2024-11-19 12:56:39.908186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.079 12:56:40 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.079 12:56:40 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:37.079 12:56:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:37.079 12:56:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2652962 00:04:37.079 12:56:40 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2652962 ']' 00:04:37.079 12:56:40 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2652962 00:04:37.079 12:56:40 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:37.079 12:56:40 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.079 12:56:40 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2652962 00:04:37.079 12:56:40 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.079 12:56:40 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.079 12:56:40 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2652962' 00:04:37.079 killing process with pid 2652962 00:04:37.079 12:56:40 alias_rpc -- common/autotest_common.sh@973 -- # kill 2652962 00:04:37.079 12:56:40 alias_rpc -- common/autotest_common.sh@978 -- # wait 2652962 00:04:37.390 00:04:37.390 real 0m1.144s 00:04:37.390 user 0m1.160s 00:04:37.390 sys 0m0.426s 00:04:37.390 12:56:40 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.390 12:56:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.390 ************************************ 00:04:37.390 END TEST alias_rpc 00:04:37.390 ************************************ 00:04:37.390 12:56:40 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:37.390 12:56:40 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:37.390 12:56:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.390 12:56:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.390 12:56:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.651 ************************************ 00:04:37.651 START TEST spdkcli_tcp 00:04:37.651 ************************************ 00:04:37.651 12:56:40 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:37.651 * Looking for test storage... 
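The killprocess helper traced just above resolves the PID back to a process name before signalling, so a recycled PID or a sudo wrapper is never killed by mistake. A hedged sketch of that shape, not the helper's exact code:

    killprocess() {
        local pid=$1 name
        name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for spdk_tgt
        [ "$name" = sudo ] && return 1            # refuse to signal a sudo wrapper
        kill "$pid" && echo "killing process with pid $pid"
        wait "$pid" 2>/dev/null                   # reap it if it is our child
    }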
00:04:37.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:37.651 12:56:40 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.651 12:56:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.651 12:56:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.651 12:56:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.651 12:56:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:37.651 12:56:40 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.651 12:56:40 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.651 --rc genhtml_branch_coverage=1 00:04:37.651 --rc genhtml_function_coverage=1 00:04:37.651 --rc genhtml_legend=1 00:04:37.651 --rc geninfo_all_blocks=1 00:04:37.651 --rc geninfo_unexecuted_blocks=1 00:04:37.651 00:04:37.651 ' 00:04:37.651 12:56:40 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.651 --rc genhtml_branch_coverage=1 00:04:37.651 --rc genhtml_function_coverage=1 00:04:37.651 --rc genhtml_legend=1 00:04:37.651 --rc geninfo_all_blocks=1 00:04:37.651 --rc 
geninfo_unexecuted_blocks=1 00:04:37.651 00:04:37.651 ' 00:04:37.651 12:56:40 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.651 --rc genhtml_branch_coverage=1 00:04:37.651 --rc genhtml_function_coverage=1 00:04:37.651 --rc genhtml_legend=1 00:04:37.651 --rc geninfo_all_blocks=1 00:04:37.651 --rc geninfo_unexecuted_blocks=1 00:04:37.651 00:04:37.651 ' 00:04:37.651 12:56:40 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.651 --rc genhtml_branch_coverage=1 00:04:37.651 --rc genhtml_function_coverage=1 00:04:37.651 --rc genhtml_legend=1 00:04:37.651 --rc geninfo_all_blocks=1 00:04:37.651 --rc geninfo_unexecuted_blocks=1 00:04:37.651 00:04:37.651 ' 00:04:37.651 12:56:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:37.651 12:56:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:37.651 12:56:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:37.651 12:56:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:37.651 12:56:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:37.651 12:56:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:37.651 12:56:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:37.651 12:56:40 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.652 12:56:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.652 12:56:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2653229 00:04:37.652 12:56:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2653229 00:04:37.652 12:56:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:37.652 12:56:40 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2653229 ']' 00:04:37.652 12:56:40 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.652 12:56:40 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.652 12:56:40 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.652 12:56:40 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.652 12:56:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.652 [2024-11-19 12:56:41.010104] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:37.652 [2024-11-19 12:56:41.010149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2653229 ] 00:04:37.910 [2024-11-19 12:56:41.086683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.910 [2024-11-19 12:56:41.128585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.910 [2024-11-19 12:56:41.128586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.170 12:56:41 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.170 12:56:41 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:38.170 12:56:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2653413 00:04:38.170 12:56:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:38.170 12:56:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:38.170 [ 00:04:38.170 "bdev_malloc_delete", 00:04:38.170 "bdev_malloc_create", 00:04:38.170 "bdev_null_resize", 00:04:38.170 "bdev_null_delete", 00:04:38.170 "bdev_null_create", 00:04:38.170 "bdev_nvme_cuse_unregister", 00:04:38.170 "bdev_nvme_cuse_register", 00:04:38.170 "bdev_opal_new_user", 00:04:38.170 "bdev_opal_set_lock_state", 00:04:38.170 "bdev_opal_delete", 00:04:38.170 "bdev_opal_get_info", 00:04:38.170 "bdev_opal_create", 00:04:38.170 "bdev_nvme_opal_revert", 00:04:38.170 "bdev_nvme_opal_init", 00:04:38.170 "bdev_nvme_send_cmd", 00:04:38.170 "bdev_nvme_set_keys", 00:04:38.170 "bdev_nvme_get_path_iostat", 00:04:38.170 "bdev_nvme_get_mdns_discovery_info", 00:04:38.170 "bdev_nvme_stop_mdns_discovery", 00:04:38.170 "bdev_nvme_start_mdns_discovery", 00:04:38.170 "bdev_nvme_set_multipath_policy", 00:04:38.170 "bdev_nvme_set_preferred_path", 00:04:38.170 "bdev_nvme_get_io_paths", 00:04:38.170 "bdev_nvme_remove_error_injection", 00:04:38.170 "bdev_nvme_add_error_injection", 00:04:38.170 "bdev_nvme_get_discovery_info", 00:04:38.170 "bdev_nvme_stop_discovery", 00:04:38.170 "bdev_nvme_start_discovery", 00:04:38.170 "bdev_nvme_get_controller_health_info", 00:04:38.170 "bdev_nvme_disable_controller", 00:04:38.170 "bdev_nvme_enable_controller", 00:04:38.170 "bdev_nvme_reset_controller", 00:04:38.170 "bdev_nvme_get_transport_statistics", 00:04:38.170 "bdev_nvme_apply_firmware", 00:04:38.170 "bdev_nvme_detach_controller", 00:04:38.170 "bdev_nvme_get_controllers", 00:04:38.170 "bdev_nvme_attach_controller", 00:04:38.170 "bdev_nvme_set_hotplug", 00:04:38.170 "bdev_nvme_set_options", 00:04:38.170 "bdev_passthru_delete", 00:04:38.170 "bdev_passthru_create", 00:04:38.170 "bdev_lvol_set_parent_bdev", 00:04:38.170 "bdev_lvol_set_parent", 00:04:38.170 "bdev_lvol_check_shallow_copy", 00:04:38.170 "bdev_lvol_start_shallow_copy", 00:04:38.170 "bdev_lvol_grow_lvstore", 00:04:38.170 "bdev_lvol_get_lvols", 00:04:38.170 "bdev_lvol_get_lvstores", 00:04:38.170 "bdev_lvol_delete", 00:04:38.170 "bdev_lvol_set_read_only", 00:04:38.170 "bdev_lvol_resize", 00:04:38.170 "bdev_lvol_decouple_parent", 00:04:38.170 "bdev_lvol_inflate", 00:04:38.170 "bdev_lvol_rename", 00:04:38.170 "bdev_lvol_clone_bdev", 00:04:38.170 "bdev_lvol_clone", 00:04:38.170 "bdev_lvol_snapshot", 00:04:38.170 "bdev_lvol_create", 00:04:38.170 "bdev_lvol_delete_lvstore", 00:04:38.170 "bdev_lvol_rename_lvstore", 
00:04:38.170 "bdev_lvol_create_lvstore", 00:04:38.170 "bdev_raid_set_options", 00:04:38.170 "bdev_raid_remove_base_bdev", 00:04:38.170 "bdev_raid_add_base_bdev", 00:04:38.170 "bdev_raid_delete", 00:04:38.170 "bdev_raid_create", 00:04:38.170 "bdev_raid_get_bdevs", 00:04:38.170 "bdev_error_inject_error", 00:04:38.170 "bdev_error_delete", 00:04:38.170 "bdev_error_create", 00:04:38.170 "bdev_split_delete", 00:04:38.170 "bdev_split_create", 00:04:38.170 "bdev_delay_delete", 00:04:38.170 "bdev_delay_create", 00:04:38.170 "bdev_delay_update_latency", 00:04:38.170 "bdev_zone_block_delete", 00:04:38.170 "bdev_zone_block_create", 00:04:38.170 "blobfs_create", 00:04:38.170 "blobfs_detect", 00:04:38.170 "blobfs_set_cache_size", 00:04:38.170 "bdev_aio_delete", 00:04:38.170 "bdev_aio_rescan", 00:04:38.170 "bdev_aio_create", 00:04:38.170 "bdev_ftl_set_property", 00:04:38.170 "bdev_ftl_get_properties", 00:04:38.170 "bdev_ftl_get_stats", 00:04:38.170 "bdev_ftl_unmap", 00:04:38.170 "bdev_ftl_unload", 00:04:38.170 "bdev_ftl_delete", 00:04:38.170 "bdev_ftl_load", 00:04:38.170 "bdev_ftl_create", 00:04:38.170 "bdev_virtio_attach_controller", 00:04:38.170 "bdev_virtio_scsi_get_devices", 00:04:38.170 "bdev_virtio_detach_controller", 00:04:38.170 "bdev_virtio_blk_set_hotplug", 00:04:38.170 "bdev_iscsi_delete", 00:04:38.170 "bdev_iscsi_create", 00:04:38.170 "bdev_iscsi_set_options", 00:04:38.170 "accel_error_inject_error", 00:04:38.170 "ioat_scan_accel_module", 00:04:38.170 "dsa_scan_accel_module", 00:04:38.170 "iaa_scan_accel_module", 00:04:38.170 "vfu_virtio_create_fs_endpoint", 00:04:38.170 "vfu_virtio_create_scsi_endpoint", 00:04:38.170 "vfu_virtio_scsi_remove_target", 00:04:38.170 "vfu_virtio_scsi_add_target", 00:04:38.170 "vfu_virtio_create_blk_endpoint", 00:04:38.170 "vfu_virtio_delete_endpoint", 00:04:38.170 "keyring_file_remove_key", 00:04:38.170 "keyring_file_add_key", 00:04:38.170 "keyring_linux_set_options", 00:04:38.170 "fsdev_aio_delete", 00:04:38.170 "fsdev_aio_create", 00:04:38.170 "iscsi_get_histogram", 00:04:38.170 "iscsi_enable_histogram", 00:04:38.170 "iscsi_set_options", 00:04:38.170 "iscsi_get_auth_groups", 00:04:38.170 "iscsi_auth_group_remove_secret", 00:04:38.170 "iscsi_auth_group_add_secret", 00:04:38.170 "iscsi_delete_auth_group", 00:04:38.170 "iscsi_create_auth_group", 00:04:38.170 "iscsi_set_discovery_auth", 00:04:38.170 "iscsi_get_options", 00:04:38.170 "iscsi_target_node_request_logout", 00:04:38.170 "iscsi_target_node_set_redirect", 00:04:38.170 "iscsi_target_node_set_auth", 00:04:38.170 "iscsi_target_node_add_lun", 00:04:38.170 "iscsi_get_stats", 00:04:38.170 "iscsi_get_connections", 00:04:38.170 "iscsi_portal_group_set_auth", 00:04:38.170 "iscsi_start_portal_group", 00:04:38.170 "iscsi_delete_portal_group", 00:04:38.170 "iscsi_create_portal_group", 00:04:38.170 "iscsi_get_portal_groups", 00:04:38.170 "iscsi_delete_target_node", 00:04:38.170 "iscsi_target_node_remove_pg_ig_maps", 00:04:38.170 "iscsi_target_node_add_pg_ig_maps", 00:04:38.170 "iscsi_create_target_node", 00:04:38.170 "iscsi_get_target_nodes", 00:04:38.170 "iscsi_delete_initiator_group", 00:04:38.170 "iscsi_initiator_group_remove_initiators", 00:04:38.170 "iscsi_initiator_group_add_initiators", 00:04:38.170 "iscsi_create_initiator_group", 00:04:38.170 "iscsi_get_initiator_groups", 00:04:38.170 "nvmf_set_crdt", 00:04:38.170 "nvmf_set_config", 00:04:38.170 "nvmf_set_max_subsystems", 00:04:38.170 "nvmf_stop_mdns_prr", 00:04:38.170 "nvmf_publish_mdns_prr", 00:04:38.170 "nvmf_subsystem_get_listeners", 00:04:38.170 
"nvmf_subsystem_get_qpairs", 00:04:38.170 "nvmf_subsystem_get_controllers", 00:04:38.170 "nvmf_get_stats", 00:04:38.170 "nvmf_get_transports", 00:04:38.170 "nvmf_create_transport", 00:04:38.170 "nvmf_get_targets", 00:04:38.170 "nvmf_delete_target", 00:04:38.170 "nvmf_create_target", 00:04:38.170 "nvmf_subsystem_allow_any_host", 00:04:38.170 "nvmf_subsystem_set_keys", 00:04:38.170 "nvmf_subsystem_remove_host", 00:04:38.170 "nvmf_subsystem_add_host", 00:04:38.170 "nvmf_ns_remove_host", 00:04:38.170 "nvmf_ns_add_host", 00:04:38.170 "nvmf_subsystem_remove_ns", 00:04:38.170 "nvmf_subsystem_set_ns_ana_group", 00:04:38.170 "nvmf_subsystem_add_ns", 00:04:38.170 "nvmf_subsystem_listener_set_ana_state", 00:04:38.170 "nvmf_discovery_get_referrals", 00:04:38.170 "nvmf_discovery_remove_referral", 00:04:38.170 "nvmf_discovery_add_referral", 00:04:38.170 "nvmf_subsystem_remove_listener", 00:04:38.170 "nvmf_subsystem_add_listener", 00:04:38.170 "nvmf_delete_subsystem", 00:04:38.170 "nvmf_create_subsystem", 00:04:38.170 "nvmf_get_subsystems", 00:04:38.170 "env_dpdk_get_mem_stats", 00:04:38.170 "nbd_get_disks", 00:04:38.170 "nbd_stop_disk", 00:04:38.170 "nbd_start_disk", 00:04:38.170 "ublk_recover_disk", 00:04:38.170 "ublk_get_disks", 00:04:38.170 "ublk_stop_disk", 00:04:38.170 "ublk_start_disk", 00:04:38.170 "ublk_destroy_target", 00:04:38.170 "ublk_create_target", 00:04:38.170 "virtio_blk_create_transport", 00:04:38.170 "virtio_blk_get_transports", 00:04:38.170 "vhost_controller_set_coalescing", 00:04:38.170 "vhost_get_controllers", 00:04:38.170 "vhost_delete_controller", 00:04:38.170 "vhost_create_blk_controller", 00:04:38.170 "vhost_scsi_controller_remove_target", 00:04:38.170 "vhost_scsi_controller_add_target", 00:04:38.170 "vhost_start_scsi_controller", 00:04:38.170 "vhost_create_scsi_controller", 00:04:38.170 "thread_set_cpumask", 00:04:38.170 "scheduler_set_options", 00:04:38.170 "framework_get_governor", 00:04:38.170 "framework_get_scheduler", 00:04:38.170 "framework_set_scheduler", 00:04:38.170 "framework_get_reactors", 00:04:38.170 "thread_get_io_channels", 00:04:38.171 "thread_get_pollers", 00:04:38.171 "thread_get_stats", 00:04:38.171 "framework_monitor_context_switch", 00:04:38.171 "spdk_kill_instance", 00:04:38.171 "log_enable_timestamps", 00:04:38.171 "log_get_flags", 00:04:38.171 "log_clear_flag", 00:04:38.171 "log_set_flag", 00:04:38.171 "log_get_level", 00:04:38.171 "log_set_level", 00:04:38.171 "log_get_print_level", 00:04:38.171 "log_set_print_level", 00:04:38.171 "framework_enable_cpumask_locks", 00:04:38.171 "framework_disable_cpumask_locks", 00:04:38.171 "framework_wait_init", 00:04:38.171 "framework_start_init", 00:04:38.171 "scsi_get_devices", 00:04:38.171 "bdev_get_histogram", 00:04:38.171 "bdev_enable_histogram", 00:04:38.171 "bdev_set_qos_limit", 00:04:38.171 "bdev_set_qd_sampling_period", 00:04:38.171 "bdev_get_bdevs", 00:04:38.171 "bdev_reset_iostat", 00:04:38.171 "bdev_get_iostat", 00:04:38.171 "bdev_examine", 00:04:38.171 "bdev_wait_for_examine", 00:04:38.171 "bdev_set_options", 00:04:38.171 "accel_get_stats", 00:04:38.171 "accel_set_options", 00:04:38.171 "accel_set_driver", 00:04:38.171 "accel_crypto_key_destroy", 00:04:38.171 "accel_crypto_keys_get", 00:04:38.171 "accel_crypto_key_create", 00:04:38.171 "accel_assign_opc", 00:04:38.171 "accel_get_module_info", 00:04:38.171 "accel_get_opc_assignments", 00:04:38.171 "vmd_rescan", 00:04:38.171 "vmd_remove_device", 00:04:38.171 "vmd_enable", 00:04:38.171 "sock_get_default_impl", 00:04:38.171 "sock_set_default_impl", 
00:04:38.171 "sock_impl_set_options", 00:04:38.171 "sock_impl_get_options", 00:04:38.171 "iobuf_get_stats", 00:04:38.171 "iobuf_set_options", 00:04:38.171 "keyring_get_keys", 00:04:38.171 "vfu_tgt_set_base_path", 00:04:38.171 "framework_get_pci_devices", 00:04:38.171 "framework_get_config", 00:04:38.171 "framework_get_subsystems", 00:04:38.171 "fsdev_set_opts", 00:04:38.171 "fsdev_get_opts", 00:04:38.171 "trace_get_info", 00:04:38.171 "trace_get_tpoint_group_mask", 00:04:38.171 "trace_disable_tpoint_group", 00:04:38.171 "trace_enable_tpoint_group", 00:04:38.171 "trace_clear_tpoint_mask", 00:04:38.171 "trace_set_tpoint_mask", 00:04:38.171 "notify_get_notifications", 00:04:38.171 "notify_get_types", 00:04:38.171 "spdk_get_version", 00:04:38.171 "rpc_get_methods" 00:04:38.171 ] 00:04:38.430 12:56:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:38.430 12:56:41 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.430 12:56:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.430 12:56:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:38.430 12:56:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2653229 00:04:38.430 12:56:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2653229 ']' 00:04:38.430 12:56:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2653229 00:04:38.430 12:56:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:38.430 12:56:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.430 12:56:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2653229 00:04:38.430 12:56:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.430 12:56:41 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.430 12:56:41 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2653229' 00:04:38.430 killing process with pid 2653229 00:04:38.430 12:56:41 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2653229 00:04:38.430 12:56:41 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2653229 00:04:38.689 00:04:38.689 real 0m1.166s 00:04:38.689 user 0m1.974s 00:04:38.689 sys 0m0.446s 00:04:38.689 12:56:41 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.689 12:56:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.689 ************************************ 00:04:38.689 END TEST spdkcli_tcp 00:04:38.689 ************************************ 00:04:38.689 12:56:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.689 12:56:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.689 12:56:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.689 12:56:41 -- common/autotest_common.sh@10 -- # set +x 00:04:38.689 ************************************ 00:04:38.689 START TEST dpdk_mem_utility 00:04:38.689 ************************************ 00:04:38.689 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.949 * Looking for test storage... 
00:04:38.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:38.949 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:38.949 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:38.949 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:38.949 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.949 12:56:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:38.949 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.949 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:38.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.949 --rc genhtml_branch_coverage=1 00:04:38.949 --rc genhtml_function_coverage=1 00:04:38.949 --rc genhtml_legend=1 00:04:38.949 --rc geninfo_all_blocks=1 00:04:38.949 --rc geninfo_unexecuted_blocks=1 00:04:38.949 00:04:38.949 ' 00:04:38.949 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:38.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.949 --rc 
genhtml_branch_coverage=1 00:04:38.949 --rc genhtml_function_coverage=1 00:04:38.949 --rc genhtml_legend=1 00:04:38.949 --rc geninfo_all_blocks=1 00:04:38.949 --rc geninfo_unexecuted_blocks=1 00:04:38.949 00:04:38.949 ' 00:04:38.949 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:38.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.949 --rc genhtml_branch_coverage=1 00:04:38.949 --rc genhtml_function_coverage=1 00:04:38.949 --rc genhtml_legend=1 00:04:38.949 --rc geninfo_all_blocks=1 00:04:38.949 --rc geninfo_unexecuted_blocks=1 00:04:38.949 00:04:38.949 ' 00:04:38.949 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:38.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.949 --rc genhtml_branch_coverage=1 00:04:38.949 --rc genhtml_function_coverage=1 00:04:38.949 --rc genhtml_legend=1 00:04:38.949 --rc geninfo_all_blocks=1 00:04:38.949 --rc geninfo_unexecuted_blocks=1 00:04:38.949 00:04:38.949 ' 00:04:38.949 12:56:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:38.949 12:56:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2653533 00:04:38.949 12:56:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2653533 00:04:38.949 12:56:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.949 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2653533 ']' 00:04:38.949 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.949 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.949 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.949 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.949 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:38.949 [2024-11-19 12:56:42.234438] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
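What follows is the whole dpdk_mem_utility flow in one trace: spdk_tgt comes up on a single core, env_dpdk_get_mem_stats asks the target to write a DPDK memory dump (the reply names /tmp/spdk_mem_dump.txt), and dpdk_mem_info.py post-processes that file, first as a heap/mempool/memzone summary and then per heap with -m. The same three calls reduced to a sketch, assuming a target is already listening on the default socket:

scripts/rpc.py env_dpdk_get_mem_stats   # target writes the dump and replies with its path
scripts/dpdk_mem_info.py                # summarize heaps, mempools and memzones from the dump
scripts/dpdk_mem_info.py -m 0           # element-level busy/free detail for heap id 0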
00:04:38.949 [2024-11-19 12:56:42.234488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2653533 ] 00:04:38.949 [2024-11-19 12:56:42.309829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.208 [2024-11-19 12:56:42.353876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.208 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.208 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:39.208 12:56:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:39.208 12:56:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:39.208 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.208 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:39.468 { 00:04:39.468 "filename": "/tmp/spdk_mem_dump.txt" 00:04:39.468 } 00:04:39.468 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.468 12:56:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:39.468 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:39.468 1 heaps totaling size 810.000000 MiB 00:04:39.468 size: 810.000000 MiB heap id: 0 00:04:39.468 end heaps---------- 00:04:39.468 9 mempools totaling size 595.772034 MiB 00:04:39.468 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:39.468 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:39.468 size: 92.545471 MiB name: bdev_io_2653533 00:04:39.468 size: 50.003479 MiB name: msgpool_2653533 00:04:39.468 size: 36.509338 MiB name: fsdev_io_2653533 00:04:39.468 size: 21.763794 MiB name: PDU_Pool 00:04:39.468 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:39.468 size: 4.133484 MiB name: evtpool_2653533 00:04:39.468 size: 0.026123 MiB name: Session_Pool 00:04:39.468 end mempools------- 00:04:39.468 6 memzones totaling size 4.142822 MiB 00:04:39.468 size: 1.000366 MiB name: RG_ring_0_2653533 00:04:39.468 size: 1.000366 MiB name: RG_ring_1_2653533 00:04:39.468 size: 1.000366 MiB name: RG_ring_4_2653533 00:04:39.468 size: 1.000366 MiB name: RG_ring_5_2653533 00:04:39.468 size: 0.125366 MiB name: RG_ring_2_2653533 00:04:39.468 size: 0.015991 MiB name: RG_ring_3_2653533 00:04:39.468 end memzones------- 00:04:39.468 12:56:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:39.468 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:39.468 list of free elements. 
size: 10.862488 MiB 00:04:39.468 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:39.468 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:39.468 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:39.468 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:39.468 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:39.468 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:39.468 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:39.468 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:39.468 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:39.468 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:39.468 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:39.468 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:39.468 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:39.468 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:39.468 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:39.468 list of standard malloc elements. size: 199.218628 MiB 00:04:39.468 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:39.468 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:39.468 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:39.468 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:39.468 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:39.468 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:39.468 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:39.468 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:39.468 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:39.468 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:39.468 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:39.468 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:39.468 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:39.468 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:39.468 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:39.468 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:39.468 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:39.468 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:39.468 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:39.468 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:39.468 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:39.468 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:39.468 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:39.468 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:39.468 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:39.468 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:39.468 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:39.468 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:39.469 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:39.469 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:39.469 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:39.469 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:39.469 list of memzone associated elements. size: 599.918884 MiB 00:04:39.469 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:39.469 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:39.469 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:39.469 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:39.469 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:39.469 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2653533_0 00:04:39.469 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:39.469 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2653533_0 00:04:39.469 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:39.469 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2653533_0 00:04:39.469 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:39.469 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:39.469 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:39.469 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:39.469 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:39.469 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2653533_0 00:04:39.469 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:39.469 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2653533 00:04:39.469 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:39.469 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2653533 00:04:39.469 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:39.469 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:39.469 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:39.469 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:39.469 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:39.469 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:39.469 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:39.469 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:39.469 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:39.469 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2653533 00:04:39.469 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:39.469 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2653533 00:04:39.469 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:39.469 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2653533 00:04:39.469 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:39.469 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2653533 00:04:39.469 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:39.469 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2653533 00:04:39.469 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:39.469 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2653533 00:04:39.469 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:39.469 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:39.469 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:39.469 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:39.469 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:39.469 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:39.469 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:39.469 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2653533 00:04:39.469 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:39.469 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2653533 00:04:39.469 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:39.469 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:39.469 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:39.469 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:39.469 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:39.469 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2653533 00:04:39.469 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:39.469 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:39.469 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:39.469 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2653533 00:04:39.469 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:39.469 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2653533 00:04:39.469 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:39.469 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2653533 00:04:39.469 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:39.469 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:39.469 12:56:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:39.469 12:56:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2653533 00:04:39.469 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2653533 ']' 00:04:39.469 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2653533 00:04:39.469 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:39.469 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.469 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2653533 00:04:39.469 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.469 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.469 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2653533' 00:04:39.469 killing process with pid 2653533 00:04:39.469 12:56:42 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2653533 00:04:39.469 12:56:42 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2653533 00:04:39.729 00:04:39.729 real 0m1.027s 00:04:39.729 user 0m0.986s 00:04:39.729 sys 0m0.398s 00:04:39.729 12:56:43 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.729 12:56:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:39.729 ************************************ 00:04:39.729 END TEST dpdk_mem_utility 00:04:39.729 ************************************ 00:04:39.729 12:56:43 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:39.729 12:56:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.729 12:56:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.729 12:56:43 -- common/autotest_common.sh@10 -- # set +x 00:04:39.729 ************************************ 00:04:39.729 START TEST event 00:04:39.729 ************************************ 00:04:39.988 12:56:43 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:39.988 * Looking for test storage... 00:04:39.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:39.988 12:56:43 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.988 12:56:43 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.988 12:56:43 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.988 12:56:43 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.988 12:56:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.988 12:56:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.988 12:56:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.988 12:56:43 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.988 12:56:43 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.988 12:56:43 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.988 12:56:43 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.988 12:56:43 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.988 12:56:43 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.988 12:56:43 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.988 12:56:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.988 12:56:43 event -- scripts/common.sh@344 -- # case "$op" in 00:04:39.988 12:56:43 event -- scripts/common.sh@345 -- # : 1 00:04:39.988 12:56:43 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.988 12:56:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.988 12:56:43 event -- scripts/common.sh@365 -- # decimal 1 00:04:39.988 12:56:43 event -- scripts/common.sh@353 -- # local d=1 00:04:39.988 12:56:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.988 12:56:43 event -- scripts/common.sh@355 -- # echo 1 00:04:39.988 12:56:43 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.988 12:56:43 event -- scripts/common.sh@366 -- # decimal 2 00:04:39.989 12:56:43 event -- scripts/common.sh@353 -- # local d=2 00:04:39.989 12:56:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.989 12:56:43 event -- scripts/common.sh@355 -- # echo 2 00:04:39.989 12:56:43 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.989 12:56:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.989 12:56:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.989 12:56:43 event -- scripts/common.sh@368 -- # return 0 00:04:39.989 12:56:43 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.989 12:56:43 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.989 --rc genhtml_branch_coverage=1 00:04:39.989 --rc genhtml_function_coverage=1 00:04:39.989 --rc genhtml_legend=1 00:04:39.989 --rc geninfo_all_blocks=1 00:04:39.989 --rc geninfo_unexecuted_blocks=1 00:04:39.989 00:04:39.989 ' 00:04:39.989 12:56:43 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.989 --rc genhtml_branch_coverage=1 00:04:39.989 --rc genhtml_function_coverage=1 00:04:39.989 --rc genhtml_legend=1 00:04:39.989 --rc geninfo_all_blocks=1 00:04:39.989 --rc geninfo_unexecuted_blocks=1 00:04:39.989 00:04:39.989 ' 00:04:39.989 12:56:43 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.989 --rc genhtml_branch_coverage=1 00:04:39.989 --rc genhtml_function_coverage=1 00:04:39.989 --rc genhtml_legend=1 00:04:39.989 --rc geninfo_all_blocks=1 00:04:39.989 --rc geninfo_unexecuted_blocks=1 00:04:39.989 00:04:39.989 ' 00:04:39.989 12:56:43 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.989 --rc genhtml_branch_coverage=1 00:04:39.989 --rc genhtml_function_coverage=1 00:04:39.989 --rc genhtml_legend=1 00:04:39.989 --rc geninfo_all_blocks=1 00:04:39.989 --rc geninfo_unexecuted_blocks=1 00:04:39.989 00:04:39.989 ' 00:04:39.989 12:56:43 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:39.989 12:56:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:39.989 12:56:43 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:39.989 12:56:43 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:39.989 12:56:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.989 12:56:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.989 ************************************ 00:04:39.989 START TEST event_perf 00:04:39.989 ************************************ 00:04:39.989 12:56:43 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:39.989 Running I/O for 1 seconds...[2024-11-19 12:56:43.329930] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:39.989 [2024-11-19 12:56:43.330003] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2653823 ] 00:04:40.248 [2024-11-19 12:56:43.410121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:40.248 [2024-11-19 12:56:43.454332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.248 [2024-11-19 12:56:43.454438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.248 [2024-11-19 12:56:43.454546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.248 Running I/O for 1 seconds...[2024-11-19 12:56:43.454547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:41.184 00:04:41.184 lcore 0: 202621 00:04:41.184 lcore 1: 202620 00:04:41.184 lcore 2: 202622 00:04:41.184 lcore 3: 202622 00:04:41.184 done. 00:04:41.184 00:04:41.184 real 0m1.187s 00:04:41.184 user 0m4.103s 00:04:41.184 sys 0m0.080s 00:04:41.184 12:56:44 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.184 12:56:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:41.184 ************************************ 00:04:41.184 END TEST event_perf 00:04:41.184 ************************************ 00:04:41.184 12:56:44 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:41.184 12:56:44 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:41.184 12:56:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.184 12:56:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.444 ************************************ 00:04:41.444 START TEST event_reactor 00:04:41.444 ************************************ 00:04:41.444 12:56:44 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:41.444 [2024-11-19 12:56:44.586929] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
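Reading the event_perf summary above: -m 0xF starts one reactor per core in the mask, -t 1 runs the benchmark for one second, and each lcore line is that reactor's event count for the run, so the four counts of roughly 202,6xx total about 810k events across the second; take that aggregate as a log-reading, since the tool itself only prints per-lcore counters. The captured invocation, reduced to a sketch:

test/event/event_perf/event_perf -m 0xF -t 1   # four reactors (mask 0xF), 1 s run, one event counter per lcore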
00:04:41.444 [2024-11-19 12:56:44.587003] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2654084 ] 00:04:41.444 [2024-11-19 12:56:44.665138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.444 [2024-11-19 12:56:44.705649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.382 test_start 00:04:42.382 oneshot 00:04:42.382 tick 100 00:04:42.382 tick 100 00:04:42.382 tick 250 00:04:42.382 tick 100 00:04:42.382 tick 100 00:04:42.382 tick 250 00:04:42.382 tick 100 00:04:42.382 tick 500 00:04:42.382 tick 100 00:04:42.382 tick 100 00:04:42.382 tick 250 00:04:42.382 tick 100 00:04:42.382 tick 100 00:04:42.382 test_end 00:04:42.382 00:04:42.382 real 0m1.177s 00:04:42.382 user 0m1.097s 00:04:42.382 sys 0m0.076s 00:04:42.382 12:56:45 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.382 12:56:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:42.382 ************************************ 00:04:42.382 END TEST event_reactor 00:04:42.382 ************************************ 00:04:42.641 12:56:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:42.641 12:56:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:42.641 12:56:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.641 12:56:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.641 ************************************ 00:04:42.641 START TEST event_reactor_perf 00:04:42.641 ************************************ 00:04:42.641 12:56:45 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:42.641 [2024-11-19 12:56:45.836365] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
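On the event_reactor run just above: test_start and test_end bracket the run, oneshot is a single queued event, and each tick N line appears to come from a timed poller whose period argument is N, which would explain why tick 100 fires most often and tick 500 only once; that reading of the numbers is an inference from the output, not from the test source. Invocation as captured:

test/event/reactor/reactor -t 1   # one reactor (the EAL line shows -c 0x1), 1 s of poller ticks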
00:04:42.641 [2024-11-19 12:56:45.836432] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2654330 ] 00:04:42.641 [2024-11-19 12:56:45.917598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.641 [2024-11-19 12:56:45.959280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.020 test_start 00:04:44.020 test_end 00:04:44.020 Performance: 503537 events per second 00:04:44.020 00:04:44.020 real 0m1.182s 00:04:44.020 user 0m1.099s 00:04:44.020 sys 0m0.079s 00:04:44.020 12:56:46 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.020 12:56:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.020 ************************************ 00:04:44.020 END TEST event_reactor_perf 00:04:44.020 ************************************ 00:04:44.020 12:56:47 event -- event/event.sh@49 -- # uname -s 00:04:44.020 12:56:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:44.020 12:56:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:44.020 12:56:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.020 12:56:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.020 12:56:47 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.020 ************************************ 00:04:44.020 START TEST event_scheduler 00:04:44.020 ************************************ 00:04:44.020 12:56:47 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:44.020 * Looking for test storage... 
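The event_reactor_perf figure above, 503537 events per second, comes from a single reactor: the EAL line shows -c 0x1, so every event is produced and consumed on one core with no cross-core passing, which plausibly explains the higher per-core rate than event_perf's four-reactor run; treat that comparison as an inference from the test names, not a benchmark claim. Captured invocation:

test/event/reactor_perf/reactor_perf -t 1   # one reactor, 1 s, prints a single events-per-second total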
00:04:44.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:44.020 12:56:47 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.021 12:56:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.021 12:56:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.021 12:56:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.021 12:56:47 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:44.021 12:56:47 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.021 12:56:47 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.021 --rc genhtml_branch_coverage=1 00:04:44.021 --rc genhtml_function_coverage=1 00:04:44.021 --rc genhtml_legend=1 00:04:44.021 --rc geninfo_all_blocks=1 00:04:44.021 --rc geninfo_unexecuted_blocks=1 00:04:44.021 00:04:44.021 ' 00:04:44.021 12:56:47 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.021 --rc genhtml_branch_coverage=1 00:04:44.021 --rc genhtml_function_coverage=1 00:04:44.021 --rc genhtml_legend=1 00:04:44.021 --rc geninfo_all_blocks=1 00:04:44.021 --rc geninfo_unexecuted_blocks=1 00:04:44.021 00:04:44.021 ' 00:04:44.021 12:56:47 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.021 --rc genhtml_branch_coverage=1 00:04:44.021 --rc genhtml_function_coverage=1 00:04:44.021 --rc genhtml_legend=1 00:04:44.021 --rc geninfo_all_blocks=1 00:04:44.021 --rc geninfo_unexecuted_blocks=1 00:04:44.021 00:04:44.021 ' 00:04:44.021 12:56:47 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.021 --rc genhtml_branch_coverage=1 00:04:44.021 --rc genhtml_function_coverage=1 00:04:44.021 --rc genhtml_legend=1 00:04:44.021 --rc geninfo_all_blocks=1 00:04:44.021 --rc geninfo_unexecuted_blocks=1 00:04:44.021 00:04:44.021 ' 00:04:44.021 12:56:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:44.021 12:56:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2654614 00:04:44.021 12:56:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.021 12:56:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:44.021 12:56:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2654614 00:04:44.021 12:56:47 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2654614 ']' 00:04:44.021 12:56:47 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.021 12:56:47 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.021 12:56:47 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.021 12:56:47 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.021 12:56:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.021 [2024-11-19 12:56:47.297898] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:44.021 [2024-11-19 12:56:47.297953] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2654614 ] 00:04:44.021 [2024-11-19 12:56:47.373898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:44.280 [2024-11-19 12:56:47.418263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.280 [2024-11-19 12:56:47.418370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.280 [2024-11-19 12:56:47.418454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.280 [2024-11-19 12:56:47.418455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:44.280 12:56:47 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.280 12:56:47 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:44.280 12:56:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:44.280 12:56:47 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.280 12:56:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.280 [2024-11-19 12:56:47.463066] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:44.280 [2024-11-19 12:56:47.463084] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:44.280 [2024-11-19 12:56:47.463093] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:44.280 [2024-11-19 12:56:47.463099] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:44.280 [2024-11-19 12:56:47.463105] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:44.280 12:56:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.280 12:56:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:44.280 12:56:47 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.280 12:56:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.280 [2024-11-19 12:56:47.541019] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:44.280 12:56:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.280 12:56:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:44.280 12:56:47 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.280 12:56:47 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.280 12:56:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.280 ************************************ 00:04:44.280 START TEST scheduler_create_thread 00:04:44.280 ************************************ 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.280 2 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.280 3 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.280 4 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.280 5 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.280 6 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.280 7 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.280 8 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.280 9 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.280 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.538 10 00:04:44.538 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.538 12:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:44.538 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.538 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.538 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.538 12:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:44.538 12:56:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:44.538 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.538 12:56:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.475 12:56:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.475 12:56:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:45.475 12:56:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.475 12:56:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.856 12:56:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.856 12:56:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:46.856 12:56:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:46.856 12:56:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.856 12:56:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.791 12:56:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.791 00:04:47.791 real 0m3.380s 00:04:47.791 user 0m0.024s 00:04:47.791 sys 0m0.005s 00:04:47.791 12:56:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.791 12:56:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.791 ************************************ 00:04:47.791 END TEST scheduler_create_thread 00:04:47.791 ************************************ 00:04:47.791 12:56:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:47.791 12:56:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2654614 00:04:47.791 12:56:50 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2654614 ']' 00:04:47.791 12:56:50 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2654614 00:04:47.791 12:56:50 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:47.791 12:56:50 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.791 12:56:50 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2654614 00:04:47.791 12:56:51 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:47.791 12:56:51 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:47.791 12:56:51 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2654614' 00:04:47.791 killing process with pid 2654614 00:04:47.791 12:56:51 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2654614 00:04:47.791 12:56:51 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2654614 00:04:48.049 [2024-11-19 12:56:51.333223] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
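One teardown detail worth a gloss: killprocess sanity-checks its victim with ps --no-headers -o comm= and sees reactor_2 here, where the earlier single-core targets showed reactor_0. The scheduler app was launched with -p 0x2, which the EAL line renders as --main-lcore=2, and SPDK names each reactor thread reactor_<core>, so the process comm tracks the main core; that last step is an inference from the two observations in this log. The check itself, with pid as a stand-in variable:

ps --no-headers -o comm= "$pid"   # prints the main thread's name, reactor_2 for a main core of 2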
00:04:48.308 00:04:48.308 real 0m4.468s 00:04:48.308 user 0m7.796s 00:04:48.308 sys 0m0.382s 00:04:48.308 12:56:51 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.308 12:56:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.308 ************************************ 00:04:48.308 END TEST event_scheduler 00:04:48.308 ************************************ 00:04:48.308 12:56:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:48.308 12:56:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:48.308 12:56:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.308 12:56:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.308 12:56:51 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.308 ************************************ 00:04:48.308 START TEST app_repeat 00:04:48.308 ************************************ 00:04:48.308 12:56:51 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:48.309 12:56:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.309 12:56:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.309 12:56:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:48.309 12:56:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.309 12:56:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:48.309 12:56:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:48.309 12:56:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:48.309 12:56:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2655355 00:04:48.309 12:56:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.309 12:56:51 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:48.309 12:56:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2655355' 00:04:48.309 Process app_repeat pid: 2655355 00:04:48.309 12:56:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:48.309 12:56:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:48.309 spdk_app_start Round 0 00:04:48.309 12:56:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2655355 /var/tmp/spdk-nbd.sock 00:04:48.309 12:56:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2655355 ']' 00:04:48.309 12:56:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:48.309 12:56:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.309 12:56:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:48.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:48.309 12:56:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.309 12:56:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:48.309 [2024-11-19 12:56:51.656189] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
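Annotation: before the per-device detail that follows, the shape of app_repeat_test as the trace walks it: the app is launched once with a 4-second repeat timer on core mask 0x3, and each of rounds 0..2 creates two 64 MB malloc bdevs (4096-byte blocks), runs the nbd verify pass, then kills the instance so it re-initializes for the next round. A loose outline, reconstructed from the trace, with rpc_server standing in for /var/tmp/spdk-nbd.sock:
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        rpc.py -s "$rpc_server" bdev_malloc_create 64 4096        # -> Malloc0
        rpc.py -s "$rpc_server" bdev_malloc_create 64 4096        # -> Malloc1
        nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM        # app re-inits for the next round
        sleep 3
    done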
00:04:48.309 [2024-11-19 12:56:51.656261] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2655355 ] 00:04:48.567 [2024-11-19 12:56:51.735868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.567 [2024-11-19 12:56:51.778536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.567 [2024-11-19 12:56:51.778537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.567 12:56:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.567 12:56:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:48.567 12:56:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.825 Malloc0 00:04:48.825 12:56:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.084 Malloc1 00:04:49.084 12:56:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.084 12:56:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.084 12:56:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.084 12:56:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:49.084 12:56:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.084 12:56:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:49.084 12:56:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.084 12:56:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.084 12:56:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.084 12:56:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:49.084 12:56:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.084 12:56:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:49.084 12:56:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:49.084 12:56:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:49.084 12:56:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.084 12:56:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:49.342 /dev/nbd0 00:04:49.342 12:56:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:49.342 12:56:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:49.342 12:56:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:49.342 12:56:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:49.342 12:56:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:49.342 12:56:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:49.342 12:56:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:49.342 12:56:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:49.342 12:56:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:49.342 12:56:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:49.342 12:56:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.342 1+0 records in 00:04:49.342 1+0 records out 00:04:49.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224625 s, 18.2 MB/s 00:04:49.342 12:56:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.342 12:56:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:49.342 12:56:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.342 12:56:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:49.342 12:56:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:49.342 12:56:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.342 12:56:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.342 12:56:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:49.600 /dev/nbd1 00:04:49.600 12:56:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:49.600 12:56:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:49.600 12:56:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:49.600 12:56:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:49.600 12:56:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:49.600 12:56:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:49.600 12:56:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:49.600 12:56:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:49.600 12:56:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:49.600 12:56:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:49.600 12:56:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.600 1+0 records in 00:04:49.600 1+0 records out 00:04:49.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235183 s, 17.4 MB/s 00:04:49.600 12:56:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.600 12:56:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:49.600 12:56:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.600 12:56:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:49.600 12:56:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:49.600 12:56:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.600 12:56:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.600 
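Annotation: both exports are now attached, each gated by the waitfornbd helper seen twice above. It polls /proc/partitions until the kernel names the device, then proves the device answers reads with a single 4 KiB O_DIRECT transfer. Condensed below; the retry delay is assumed (it is not visible in the trace) and the scratch file path is shortened from the workspace path the log shows.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                      # assumed back-off between polls
        done
        # one direct-I/O block read confirms the device is usable
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                   # non-empty readback => ready
    }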
12:56:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.600 12:56:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.600 12:56:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:49.859 { 00:04:49.859 "nbd_device": "/dev/nbd0", 00:04:49.859 "bdev_name": "Malloc0" 00:04:49.859 }, 00:04:49.859 { 00:04:49.859 "nbd_device": "/dev/nbd1", 00:04:49.859 "bdev_name": "Malloc1" 00:04:49.859 } 00:04:49.859 ]' 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:49.859 { 00:04:49.859 "nbd_device": "/dev/nbd0", 00:04:49.859 "bdev_name": "Malloc0" 00:04:49.859 }, 00:04:49.859 { 00:04:49.859 "nbd_device": "/dev/nbd1", 00:04:49.859 "bdev_name": "Malloc1" 00:04:49.859 } 00:04:49.859 ]' 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:49.859 /dev/nbd1' 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:49.859 /dev/nbd1' 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:49.859 256+0 records in 00:04:49.859 256+0 records out 00:04:49.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106687 s, 98.3 MB/s 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:49.859 256+0 records in 00:04:49.859 256+0 records out 00:04:49.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137485 s, 76.3 MB/s 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:49.859 256+0 records in 00:04:49.859 256+0 records out 00:04:49.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014869 s, 70.5 MB/s 00:04:49.859 12:56:53 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.859 12:56:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:50.118 12:56:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:50.118 12:56:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:50.118 12:56:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:50.118 12:56:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.118 12:56:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.118 12:56:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:50.118 12:56:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.118 12:56:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.118 12:56:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.118 12:56:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:50.377 12:56:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:50.377 12:56:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:50.377 12:56:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:50.377 12:56:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.377 12:56:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:50.377 12:56:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:50.377 12:56:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.377 12:56:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.377 12:56:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.377 12:56:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.377 12:56:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.635 12:56:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:50.635 12:56:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:50.635 12:56:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.636 12:56:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:50.636 12:56:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:50.636 12:56:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.636 12:56:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:50.636 12:56:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:50.636 12:56:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:50.636 12:56:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:50.636 12:56:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:50.636 12:56:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:50.636 12:56:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:50.895 12:56:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:50.895 [2024-11-19 12:56:54.205617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.895 [2024-11-19 12:56:54.242858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.895 [2024-11-19 12:56:54.242859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.154 [2024-11-19 12:56:54.283849] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:51.154 [2024-11-19 12:56:54.283888] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:54.442 12:56:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:54.442 12:56:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:54.442 spdk_app_start Round 1 00:04:54.442 12:56:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2655355 /var/tmp/spdk-nbd.sock 00:04:54.442 12:56:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2655355 ']' 00:04:54.442 12:56:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.442 12:56:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.442 12:56:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
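Annotation: that closes Round 0's full data path: a 1 MiB random pattern is staged with dd, pushed through each export with O_DIRECT writes, read back with cmp, and after both devices are stopped nbd_get_disks must come back empty. The verify core, condensed, with the staging file path shortened:
    # stage 1 MiB of random data, write it through each export, then compare
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M /tmp/nbdrandtest "$dev"    # byte-for-byte readback check
    done
    rm /tmp/nbdrandtest
    # after nbd_stop_disks: the exported-disk list must reduce to zero names
    # (grep -c exits non-zero on no matches, hence the || true)
    count=$(rpc.py -s "$rpc_server" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]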
00:04:54.442 12:56:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.442 12:56:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.442 12:56:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.442 12:56:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:54.442 12:56:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.442 Malloc0 00:04:54.442 12:56:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.442 Malloc1 00:04:54.442 12:56:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.442 12:56:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.442 12:56:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.442 12:56:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:54.442 12:56:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.442 12:56:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:54.442 12:56:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.442 12:56:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.442 12:56:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.442 12:56:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:54.442 12:56:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.442 12:56:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:54.442 12:56:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:54.442 12:56:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:54.442 12:56:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.442 12:56:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.701 /dev/nbd0 00:04:54.701 12:56:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.701 12:56:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.701 12:56:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:54.701 12:56:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:54.701 12:56:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:54.701 12:56:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:54.701 12:56:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:54.701 12:56:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:54.701 12:56:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:54.701 12:56:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:54.701 12:56:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:54.701 1+0 records in 00:04:54.701 1+0 records out 00:04:54.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224898 s, 18.2 MB/s 00:04:54.701 12:56:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.701 12:56:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:54.701 12:56:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.701 12:56:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:54.701 12:56:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:54.701 12:56:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.701 12:56:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.701 12:56:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.960 /dev/nbd1 00:04:54.960 12:56:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.960 12:56:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.960 12:56:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:54.960 12:56:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:54.960 12:56:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:54.960 12:56:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:54.960 12:56:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:54.960 12:56:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:54.960 12:56:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:54.960 12:56:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:54.960 12:56:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.960 1+0 records in 00:04:54.960 1+0 records out 00:04:54.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261719 s, 15.7 MB/s 00:04:54.960 12:56:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.960 12:56:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:54.960 12:56:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.960 12:56:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:54.960 12:56:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:54.960 12:56:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.960 12:56:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.960 12:56:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.960 12:56:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.960 12:56:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:55.220 { 00:04:55.220 "nbd_device": "/dev/nbd0", 00:04:55.220 "bdev_name": "Malloc0" 00:04:55.220 }, 00:04:55.220 { 00:04:55.220 "nbd_device": "/dev/nbd1", 00:04:55.220 "bdev_name": "Malloc1" 00:04:55.220 } 00:04:55.220 ]' 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:55.220 { 00:04:55.220 "nbd_device": "/dev/nbd0", 00:04:55.220 "bdev_name": "Malloc0" 00:04:55.220 }, 00:04:55.220 { 00:04:55.220 "nbd_device": "/dev/nbd1", 00:04:55.220 "bdev_name": "Malloc1" 00:04:55.220 } 00:04:55.220 ]' 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:55.220 /dev/nbd1' 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:55.220 /dev/nbd1' 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:55.220 256+0 records in 00:04:55.220 256+0 records out 00:04:55.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100062 s, 105 MB/s 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:55.220 256+0 records in 00:04:55.220 256+0 records out 00:04:55.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138165 s, 75.9 MB/s 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:55.220 256+0 records in 00:04:55.220 256+0 records out 00:04:55.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157751 s, 66.5 MB/s 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.220 12:56:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:55.479 12:56:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:55.479 12:56:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:55.479 12:56:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:55.479 12:56:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.479 12:56:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.479 12:56:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.479 12:56:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.479 12:56:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.479 12:56:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.479 12:56:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.738 12:56:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.738 12:56:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.738 12:56:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.738 12:56:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.738 12:56:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.738 12:56:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.738 12:56:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.738 12:56:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.738 12:56:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.738 12:56:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.738 12:56:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.995 12:56:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.995 12:56:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.995 12:56:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.995 12:56:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.995 12:56:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.995 12:56:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.995 12:56:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.995 12:56:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.995 12:56:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.995 12:56:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.995 12:56:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.995 12:56:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.995 12:56:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.253 12:56:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:56.253 [2024-11-19 12:56:59.575294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.253 [2024-11-19 12:56:59.613006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.253 [2024-11-19 12:56:59.613007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.512 [2024-11-19 12:56:59.654878] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:56.512 [2024-11-19 12:56:59.654934] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:59.800 12:57:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:59.800 12:57:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:59.800 spdk_app_start Round 2 00:04:59.800 12:57:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2655355 /var/tmp/spdk-nbd.sock 00:04:59.800 12:57:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2655355 ']' 00:04:59.800 12:57:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.800 12:57:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.800 12:57:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:59.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:59.800 12:57:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.800 12:57:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.800 12:57:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.800 12:57:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:59.800 12:57:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.800 Malloc0 00:04:59.800 12:57:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.800 Malloc1 00:04:59.800 12:57:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.800 12:57:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.800 12:57:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.800 12:57:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.800 12:57:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.800 12:57:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.800 12:57:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.800 12:57:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.800 12:57:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.800 12:57:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.800 12:57:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.800 12:57:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.800 12:57:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:59.800 12:57:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.800 12:57:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.800 12:57:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.059 /dev/nbd0 00:05:00.059 12:57:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.059 12:57:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.059 12:57:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:00.059 12:57:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:00.059 12:57:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:00.059 12:57:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:00.059 12:57:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:00.059 12:57:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:00.059 12:57:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:00.059 12:57:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:00.059 12:57:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:00.059 1+0 records in 00:05:00.059 1+0 records out 00:05:00.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189697 s, 21.6 MB/s 00:05:00.059 12:57:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.059 12:57:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:00.059 12:57:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.059 12:57:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:00.059 12:57:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:00.059 12:57:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.059 12:57:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.059 12:57:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:00.318 /dev/nbd1 00:05:00.318 12:57:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:00.318 12:57:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:00.318 12:57:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:00.318 12:57:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:00.318 12:57:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:00.318 12:57:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:00.318 12:57:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:00.318 12:57:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:00.318 12:57:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:00.318 12:57:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:00.318 12:57:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.318 1+0 records in 00:05:00.318 1+0 records out 00:05:00.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239683 s, 17.1 MB/s 00:05:00.318 12:57:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.318 12:57:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:00.318 12:57:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.318 12:57:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:00.318 12:57:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:00.318 12:57:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.318 12:57:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.318 12:57:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.318 12:57:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.318 12:57:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.586 12:57:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:00.586 { 00:05:00.586 "nbd_device": "/dev/nbd0", 00:05:00.586 "bdev_name": "Malloc0" 00:05:00.586 }, 00:05:00.586 { 00:05:00.586 "nbd_device": "/dev/nbd1", 00:05:00.586 "bdev_name": "Malloc1" 00:05:00.586 } 00:05:00.586 ]' 00:05:00.586 12:57:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:00.586 { 00:05:00.586 "nbd_device": "/dev/nbd0", 00:05:00.586 "bdev_name": "Malloc0" 00:05:00.586 }, 00:05:00.586 { 00:05:00.587 "nbd_device": "/dev/nbd1", 00:05:00.587 "bdev_name": "Malloc1" 00:05:00.587 } 00:05:00.587 ]' 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:00.587 /dev/nbd1' 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:00.587 /dev/nbd1' 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:00.587 256+0 records in 00:05:00.587 256+0 records out 00:05:00.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106839 s, 98.1 MB/s 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:00.587 256+0 records in 00:05:00.587 256+0 records out 00:05:00.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143815 s, 72.9 MB/s 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:00.587 256+0 records in 00:05:00.587 256+0 records out 00:05:00.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152255 s, 68.9 MB/s 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.587 12:57:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:00.845 12:57:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:00.845 12:57:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:00.845 12:57:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:00.845 12:57:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.845 12:57:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.845 12:57:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:00.845 12:57:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.845 12:57:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.845 12:57:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.845 12:57:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.104 12:57:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.104 12:57:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.104 12:57:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.104 12:57:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.104 12:57:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.104 12:57:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.104 12:57:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.104 12:57:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.104 12:57:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.104 12:57:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.104 12:57:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.363 12:57:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.363 12:57:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.363 12:57:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.363 12:57:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.363 12:57:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.363 12:57:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.363 12:57:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:01.363 12:57:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.363 12:57:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.363 12:57:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.363 12:57:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.363 12:57:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.363 12:57:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.622 12:57:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:01.622 [2024-11-19 12:57:04.934726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.622 [2024-11-19 12:57:04.972496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.622 [2024-11-19 12:57:04.972497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.880 [2024-11-19 12:57:05.013814] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.880 [2024-11-19 12:57:05.013853] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.413 12:57:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2655355 /var/tmp/spdk-nbd.sock 00:05:04.413 12:57:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2655355 ']' 00:05:04.413 12:57:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.413 12:57:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.413 12:57:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:04.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:04.413 12:57:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.413 12:57:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.672 12:57:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.672 12:57:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:04.672 12:57:07 event.app_repeat -- event/event.sh@39 -- # killprocess 2655355 00:05:04.672 12:57:07 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2655355 ']' 00:05:04.672 12:57:07 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2655355 00:05:04.673 12:57:07 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:04.673 12:57:07 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.673 12:57:07 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2655355 00:05:04.673 12:57:08 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.673 12:57:08 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.673 12:57:08 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2655355' 00:05:04.673 killing process with pid 2655355 00:05:04.673 12:57:08 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2655355 00:05:04.673 12:57:08 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2655355 00:05:04.932 spdk_app_start is called in Round 0. 00:05:04.932 Shutdown signal received, stop current app iteration 00:05:04.932 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 reinitialization... 00:05:04.932 spdk_app_start is called in Round 1. 00:05:04.932 Shutdown signal received, stop current app iteration 00:05:04.932 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 reinitialization... 00:05:04.932 spdk_app_start is called in Round 2. 00:05:04.932 Shutdown signal received, stop current app iteration 00:05:04.932 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 reinitialization... 00:05:04.932 spdk_app_start is called in Round 3. 
00:05:04.932 Shutdown signal received, stop current app iteration 00:05:04.932 12:57:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:04.932 12:57:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:04.932 00:05:04.932 real 0m16.552s 00:05:04.932 user 0m36.459s 00:05:04.932 sys 0m2.534s 00:05:04.932 12:57:08 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.932 12:57:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.932 ************************************ 00:05:04.932 END TEST app_repeat 00:05:04.932 ************************************ 00:05:04.932 12:57:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:04.932 12:57:08 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:04.932 12:57:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.932 12:57:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.932 12:57:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.932 ************************************ 00:05:04.932 START TEST cpu_locks 00:05:04.932 ************************************ 00:05:04.932 12:57:08 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:05.192 * Looking for test storage... 00:05:05.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:05.192 12:57:08 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.192 12:57:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.192 12:57:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.192 12:57:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.192 12:57:08 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:05.192 12:57:08 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.192 12:57:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.192 --rc genhtml_branch_coverage=1 00:05:05.192 --rc genhtml_function_coverage=1 00:05:05.192 --rc genhtml_legend=1 00:05:05.192 --rc geninfo_all_blocks=1 00:05:05.192 --rc geninfo_unexecuted_blocks=1 00:05:05.192 00:05:05.192 ' 00:05:05.192 12:57:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.192 --rc genhtml_branch_coverage=1 00:05:05.192 --rc genhtml_function_coverage=1 00:05:05.192 --rc genhtml_legend=1 00:05:05.192 --rc geninfo_all_blocks=1 00:05:05.192 --rc geninfo_unexecuted_blocks=1 00:05:05.192 00:05:05.192 ' 00:05:05.192 12:57:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.192 --rc genhtml_branch_coverage=1 00:05:05.192 --rc genhtml_function_coverage=1 00:05:05.192 --rc genhtml_legend=1 00:05:05.192 --rc geninfo_all_blocks=1 00:05:05.192 --rc geninfo_unexecuted_blocks=1 00:05:05.192 00:05:05.192 ' 00:05:05.192 12:57:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.192 --rc genhtml_branch_coverage=1 00:05:05.192 --rc genhtml_function_coverage=1 00:05:05.192 --rc genhtml_legend=1 00:05:05.192 --rc geninfo_all_blocks=1 00:05:05.192 --rc geninfo_unexecuted_blocks=1 00:05:05.192 00:05:05.192 ' 00:05:05.192 12:57:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:05.192 12:57:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:05.192 12:57:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:05.192 12:57:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:05.192 12:57:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.192 12:57:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.192 12:57:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.192 ************************************ 
00:05:05.192 START TEST default_locks 00:05:05.192 ************************************ 00:05:05.192 12:57:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:05.192 12:57:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.192 12:57:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2658896 00:05:05.192 12:57:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2658896 00:05:05.192 12:57:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2658896 ']' 00:05:05.192 12:57:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.192 12:57:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.192 12:57:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.192 12:57:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.192 12:57:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.192 [2024-11-19 12:57:08.494703] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:05.192 [2024-11-19 12:57:08.494746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2658896 ] 00:05:05.451 [2024-11-19 12:57:08.569931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.451 [2024-11-19 12:57:08.610846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.710 12:57:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.710 12:57:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:05.710 12:57:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2658896 00:05:05.710 12:57:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2658896 00:05:05.710 12:57:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.980 lslocks: write error 00:05:05.980 12:57:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2658896 00:05:05.980 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2658896 ']' 00:05:05.980 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2658896 00:05:05.980 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:05.980 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.980 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2658896 00:05:06.244 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.244 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.244 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2658896' 00:05:06.244 killing process with pid 2658896 00:05:06.244 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2658896 00:05:06.244 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2658896 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2658896 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2658896 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2658896 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2658896 ']' 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2658896) - No such process 00:05:06.504 ERROR: process (pid: 2658896) is no longer running 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:06.504 00:05:06.504 real 0m1.222s 00:05:06.504 user 0m1.193s 00:05:06.504 sys 0m0.541s 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.504 12:57:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.504 ************************************ 00:05:06.504 END TEST default_locks 00:05:06.504 ************************************ 00:05:06.505 12:57:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:06.505 12:57:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.505 12:57:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.505 12:57:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.505 ************************************ 00:05:06.505 START TEST default_locks_via_rpc 00:05:06.505 ************************************ 00:05:06.505 12:57:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:06.505 12:57:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2659152 00:05:06.505 12:57:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.505 12:57:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2659152 00:05:06.505 12:57:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2659152 ']' 00:05:06.505 12:57:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.505 12:57:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.505 12:57:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
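Two helpers carry most of the suite above. locks_exist asks lslocks which files a pid holds locks on and greps for the spdk_cpu_lock prefix (the 'lslocks: write error' noise is most likely lslocks complaining that grep -q closed the pipe after the first match, not a test failure), and the NOT wrapper inverts a command's exit status so an expected failure, such as waitforlisten on the pid that was just killed, counts as a pass. Reduced sketches of both, assuming the /var/tmp/spdk_cpu_lock_* naming seen later in the trace:

    #!/usr/bin/env bash
    # Does pid $1 hold an SPDK CPU-core lock? Mirrors cpu_locks.sh@22 above.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    # Succeed only if the wrapped command fails; a stripped-down form of the
    # autotest_common.sh NOT/valid_exec_arg machinery traced above.
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }

    # e.g. after killprocess 2658896, the probe itself must now fail:
    # NOT locks_exist 2658896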
00:05:06.505 12:57:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.505 12:57:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.505 [2024-11-19 12:57:09.788768] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:06.505 [2024-11-19 12:57:09.788814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2659152 ] 00:05:06.505 [2024-11-19 12:57:09.862578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.763 [2024-11-19 12:57:09.901513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.763 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.763 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:06.763 12:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:06.763 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.763 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.763 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.763 12:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:06.763 12:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:06.763 12:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:06.763 12:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:06.763 12:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:06.763 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.763 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.021 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.021 12:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2659152 00:05:07.021 12:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2659152 00:05:07.021 12:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.280 12:57:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2659152 00:05:07.280 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2659152 ']' 00:05:07.280 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2659152 00:05:07.280 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:07.280 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.280 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2659152 00:05:07.280 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.280 
12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.280 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2659152' 00:05:07.280 killing process with pid 2659152 00:05:07.280 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2659152 00:05:07.280 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2659152 00:05:07.539 00:05:07.539 real 0m1.113s 00:05:07.539 user 0m1.083s 00:05:07.539 sys 0m0.501s 00:05:07.539 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.539 12:57:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.539 ************************************ 00:05:07.539 END TEST default_locks_via_rpc 00:05:07.539 ************************************ 00:05:07.539 12:57:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:07.539 12:57:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.539 12:57:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.539 12:57:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.798 ************************************ 00:05:07.799 START TEST non_locking_app_on_locked_coremask 00:05:07.799 ************************************ 00:05:07.799 12:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:07.799 12:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2659406 00:05:07.799 12:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2659406 /var/tmp/spdk.sock 00:05:07.799 12:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.799 12:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2659406 ']' 00:05:07.799 12:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.799 12:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.799 12:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.799 12:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.799 12:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.799 [2024-11-19 12:57:10.971345] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
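default_locks_via_rpc, which finished above, toggles the same locks at runtime instead of at startup: framework_disable_cpumask_locks releases the lock files, the suite checks that nothing is left, then framework_enable_cpumask_locks re-claims them and lslocks must see them again. A sketch of that round trip, assuming rpc.py on PATH, the default /var/tmp/spdk.sock, and a pgrep-based pid lookup that is purely illustrative:

    #!/usr/bin/env bash
    pid=$(pgrep -f spdk_tgt | head -n1)      # illustrative pid lookup
    rpc.py framework_disable_cpumask_locks    # drop the core lock files
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "lock still held after disable" >&2; exit 1
    fi
    rpc.py framework_enable_cpumask_locks     # re-claim the cores
    if ! lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "lock missing after enable" >&2; exit 1
    fi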
00:05:07.799 [2024-11-19 12:57:10.971388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2659406 ] 00:05:07.799 [2024-11-19 12:57:11.044650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.799 [2024-11-19 12:57:11.087049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.058 12:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.058 12:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:08.058 12:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2659456 00:05:08.058 12:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2659456 /var/tmp/spdk2.sock 00:05:08.058 12:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:08.058 12:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2659456 ']' 00:05:08.058 12:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:08.058 12:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.058 12:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:08.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:08.058 12:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.058 12:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.058 [2024-11-19 12:57:11.360031] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:08.058 [2024-11-19 12:57:11.360084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2659456 ] 00:05:08.317 [2024-11-19 12:57:11.454082] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
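The second target above shares core 0 with pid 2659406 but is launched with --disable-cpumask-locks and its own RPC socket, so it starts cleanly and logs 'CPU core locks deactivated.' instead of fighting over the lock. A sketch of the arrangement, with SPDK_BIN standing in for the spdk/build/bin/spdk_tgt path used in the trace:

    #!/usr/bin/env bash
    SPDK_BIN=./build/bin/spdk_tgt   # assumed location; adjust to your tree
    "$SPDK_BIN" -m 0x1 &            # first instance claims core 0's lock
    first=$!
    sleep 1                         # crude stand-in for waitforlisten
    # Same mask, but lock enforcement off and a second RPC socket so the
    # two instances do not collide on /var/tmp/spdk.sock.
    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    second=$!
    sleep 1
    kill "$second" "$first"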
00:05:08.317 [2024-11-19 12:57:11.454110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.317 [2024-11-19 12:57:11.543100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.884 12:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.884 12:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:08.885 12:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2659406 00:05:08.885 12:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2659406 00:05:08.885 12:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.451 lslocks: write error 00:05:09.451 12:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2659406 00:05:09.451 12:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2659406 ']' 00:05:09.451 12:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2659406 00:05:09.451 12:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:09.451 12:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.451 12:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2659406 00:05:09.710 12:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.710 12:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.710 12:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2659406' 00:05:09.710 killing process with pid 2659406 00:05:09.710 12:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2659406 00:05:09.710 12:57:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2659406 00:05:10.278 12:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2659456 00:05:10.278 12:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2659456 ']' 00:05:10.278 12:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2659456 00:05:10.278 12:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:10.278 12:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.278 12:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2659456 00:05:10.278 12:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.278 12:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.278 12:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2659456' 00:05:10.278 
killing process with pid 2659456 00:05:10.278 12:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2659456 00:05:10.278 12:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2659456 00:05:10.538 00:05:10.538 real 0m2.857s 00:05:10.538 user 0m3.029s 00:05:10.538 sys 0m0.946s 00:05:10.538 12:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.538 12:57:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.538 ************************************ 00:05:10.538 END TEST non_locking_app_on_locked_coremask 00:05:10.538 ************************************ 00:05:10.538 12:57:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:10.538 12:57:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.538 12:57:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.538 12:57:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.538 ************************************ 00:05:10.538 START TEST locking_app_on_unlocked_coremask 00:05:10.538 ************************************ 00:05:10.538 12:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:10.538 12:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2659911 00:05:10.538 12:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2659911 /var/tmp/spdk.sock 00:05:10.538 12:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:10.538 12:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2659911 ']' 00:05:10.538 12:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.538 12:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.538 12:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.538 12:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.538 12:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.538 [2024-11-19 12:57:13.901465] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:10.538 [2024-11-19 12:57:13.901507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2659911 ] 00:05:10.797 [2024-11-19 12:57:13.974374] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
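Every teardown in this suite funnels through the same killprocess steps traced above: verify the pid is still alive, read its command name back from ps so a recycled pid (or a bare sudo wrapper) is not killed by mistake, then kill and wait to reap it. A reduced sketch; the real helper handles the sudo case and non-Linux hosts more carefully:

    #!/usr/bin/env bash
    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid" || return 1                  # still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1      # refuse to kill a wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # works because the target was launched by this shell
    }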
00:05:10.797 [2024-11-19 12:57:13.974399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.797 [2024-11-19 12:57:14.017079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.057 12:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.057 12:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:11.057 12:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2660065 00:05:11.057 12:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2660065 /var/tmp/spdk2.sock 00:05:11.057 12:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:11.057 12:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2660065 ']' 00:05:11.057 12:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.057 12:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.057 12:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.057 12:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.057 12:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.057 [2024-11-19 12:57:14.290567] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:11.057 [2024-11-19 12:57:14.290617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2660065 ] 00:05:11.057 [2024-11-19 12:57:14.382301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.316 [2024-11-19 12:57:14.471484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.884 12:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.884 12:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:11.884 12:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2660065 00:05:11.884 12:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.884 12:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2660065 00:05:12.452 lslocks: write error 00:05:12.452 12:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2659911 00:05:12.452 12:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2659911 ']' 00:05:12.452 12:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2659911 00:05:12.452 12:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:12.452 12:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.452 12:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2659911 00:05:12.452 12:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.452 12:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.452 12:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2659911' 00:05:12.452 killing process with pid 2659911 00:05:12.452 12:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2659911 00:05:12.452 12:57:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2659911 00:05:13.020 12:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2660065 00:05:13.020 12:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2660065 ']' 00:05:13.020 12:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2660065 00:05:13.020 12:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:13.020 12:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.020 12:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2660065 00:05:13.280 12:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.280 12:57:16 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.280 12:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2660065' 00:05:13.280 killing process with pid 2660065 00:05:13.280 12:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2660065 00:05:13.280 12:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2660065 00:05:13.540 00:05:13.540 real 0m2.864s 00:05:13.540 user 0m3.011s 00:05:13.540 sys 0m0.950s 00:05:13.540 12:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.540 12:57:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.540 ************************************ 00:05:13.540 END TEST locking_app_on_unlocked_coremask 00:05:13.540 ************************************ 00:05:13.540 12:57:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:13.540 12:57:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.540 12:57:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.540 12:57:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.540 ************************************ 00:05:13.540 START TEST locking_app_on_locked_coremask 00:05:13.540 ************************************ 00:05:13.540 12:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:13.540 12:57:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2660428 00:05:13.540 12:57:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2660428 /var/tmp/spdk.sock 00:05:13.540 12:57:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.540 12:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2660428 ']' 00:05:13.540 12:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.540 12:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.540 12:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.540 12:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.540 12:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.540 [2024-11-19 12:57:16.834693] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:13.540 [2024-11-19 12:57:16.834737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2660428 ] 00:05:13.540 [2024-11-19 12:57:16.908277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.800 [2024-11-19 12:57:16.951576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.800 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.800 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:13.800 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2660626 00:05:13.800 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2660626 /var/tmp/spdk2.sock 00:05:13.800 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:13.800 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:13.800 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2660626 /var/tmp/spdk2.sock 00:05:13.800 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:13.800 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.800 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:14.059 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.059 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2660626 /var/tmp/spdk2.sock 00:05:14.059 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2660626 ']' 00:05:14.059 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.059 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.059 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.059 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.059 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.059 [2024-11-19 12:57:17.224573] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:14.059 [2024-11-19 12:57:17.224620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2660626 ] 00:05:14.059 [2024-11-19 12:57:17.316104] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2660428 has claimed it. 00:05:14.059 [2024-11-19 12:57:17.316144] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:14.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2660626) - No such process 00:05:14.627 ERROR: process (pid: 2660626) is no longer running 00:05:14.627 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.627 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:14.627 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:14.627 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:14.627 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:14.627 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:14.627 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2660428 00:05:14.627 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2660428 00:05:14.627 12:57:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.886 lslocks: write error 00:05:14.886 12:57:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2660428 00:05:14.886 12:57:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2660428 ']' 00:05:14.886 12:57:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2660428 00:05:14.886 12:57:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:14.886 12:57:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.886 12:57:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2660428 00:05:15.146 12:57:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.146 12:57:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.146 12:57:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2660428' 00:05:15.146 killing process with pid 2660428 00:05:15.146 12:57:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2660428 00:05:15.146 12:57:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2660428 00:05:15.405 00:05:15.405 real 0m1.789s 00:05:15.405 user 0m1.925s 00:05:15.405 sys 0m0.585s 00:05:15.405 12:57:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
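The locking_app_on_locked_coremask pass that just ended shows the enforcement side: with pid 2660428 holding core 0, a second plain launch on the same mask dies with 'Cannot create lock on core 0' and 'Unable to acquire lock on assigned core mask - exiting', and the suite wraps waitforlisten in NOT so that failure is the passing outcome. A sketch of the same assertion, reusing the NOT idiom sketched earlier and an assumed SPDK_BIN path:

    #!/usr/bin/env bash
    SPDK_BIN=./build/bin/spdk_tgt   # assumed location
    "$SPDK_BIN" -m 0x1 & first=$!
    sleep 1   # crude stand-in for waitforlisten
    # Shares core 0 *without* --disable-cpumask-locks, so it must abort.
    if "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "second instance unexpectedly started" >&2
        kill "$first"; exit 1
    fi
    kill "$first"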
00:05:15.405 12:57:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.405 ************************************ 00:05:15.405 END TEST locking_app_on_locked_coremask 00:05:15.405 ************************************ 00:05:15.405 12:57:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:15.405 12:57:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.405 12:57:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.405 12:57:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.405 ************************************ 00:05:15.405 START TEST locking_overlapped_coremask 00:05:15.405 ************************************ 00:05:15.405 12:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:15.405 12:57:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2660895 00:05:15.405 12:57:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2660895 /var/tmp/spdk.sock 00:05:15.405 12:57:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:15.405 12:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2660895 ']' 00:05:15.405 12:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.405 12:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.405 12:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.405 12:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.405 12:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.405 [2024-11-19 12:57:18.693307] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:15.405 [2024-11-19 12:57:18.693350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2660895 ] 00:05:15.405 [2024-11-19 12:57:18.769644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.665 [2024-11-19 12:57:18.815144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.665 [2024-11-19 12:57:18.815261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.665 [2024-11-19 12:57:18.815262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2660903 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2660903 /var/tmp/spdk2.sock 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2660903 /var/tmp/spdk2.sock 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2660903 /var/tmp/spdk2.sock 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2660903 ']' 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.665 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.929 [2024-11-19 12:57:19.078995] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
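The masks in this test are plain bitmaps, which is why the failure that follows lands on core 2: -m 0x7 is binary 111 (cores 0, 1 and 2) and the second launch's -m 0x1c is binary 11100 (cores 2, 3 and 4), so the two sets overlap exactly on core 2, the core named in the claim_cpu_cores error below. The intersection can be checked with shell arithmetic:

    # Core masks are bitmasks: bit N set means core N is requested.
    a=$((0x07))   # 0b00111 -> cores 0,1,2 (the running target)
    b=$((0x1c))   # 0b11100 -> cores 2,3,4 (the second launch)
    printf 'overlap mask: 0x%x\n' $((a & b))   # prints 0x4, i.e. core 2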
00:05:15.929 [2024-11-19 12:57:19.079040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2660903 ] 00:05:15.929 [2024-11-19 12:57:19.171870] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2660895 has claimed it. 00:05:15.930 [2024-11-19 12:57:19.171907] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:16.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2660903) - No such process 00:05:16.505 ERROR: process (pid: 2660903) is no longer running 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2660895 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2660895 ']' 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2660895 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2660895 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2660895' 00:05:16.505 killing process with pid 2660895 00:05:16.505 12:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2660895 00:05:16.505 12:57:19 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2660895 00:05:16.764 00:05:16.764 real 0m1.437s 00:05:16.764 user 0m3.954s 00:05:16.764 sys 0m0.379s 00:05:16.764 12:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.764 12:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.764 ************************************ 00:05:16.764 END TEST locking_overlapped_coremask 00:05:16.764 ************************************ 00:05:16.764 12:57:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:16.764 12:57:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.764 12:57:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.764 12:57:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.023 ************************************ 00:05:17.023 START TEST locking_overlapped_coremask_via_rpc 00:05:17.023 ************************************ 00:05:17.024 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:17.024 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2661159 00:05:17.024 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:17.024 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2661159 /var/tmp/spdk.sock 00:05:17.024 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2661159 ']' 00:05:17.024 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.024 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.024 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.024 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.024 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.024 [2024-11-19 12:57:20.200781] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:17.024 [2024-11-19 12:57:20.200822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2661159 ] 00:05:17.024 [2024-11-19 12:57:20.277270] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
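Editor's note: both targets in this RPC variant start with --disable-cpumask-locks, hence the "CPU core locks deactivated" notice just printed: no /var/tmp/spdk_cpu_lock_NNN files are taken at startup, so the overlapping masks can coexist until locking is re-enabled over RPC. A one-liner to see which cores are currently claimed (same lock paths that check_remaining_locks inspects later in this log):

    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo 'no cores locked yet'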
00:05:17.024 [2024-11-19 12:57:20.277298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:17.024 [2024-11-19 12:57:20.320284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.024 [2024-11-19 12:57:20.320389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.024 [2024-11-19 12:57:20.320389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.345 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.345 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:17.345 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2661172 00:05:17.345 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2661172 /var/tmp/spdk2.sock 00:05:17.345 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:17.345 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2661172 ']' 00:05:17.345 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.345 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.345 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.345 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.345 12:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.345 [2024-11-19 12:57:20.593236] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:17.345 [2024-11-19 12:57:20.593282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2661172 ] 00:05:17.621 [2024-11-19 12:57:20.685044] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
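Editor's note: with the second target up, the test enables the locks over RPC on the first target, then asserts that the same call fails on the second. That assertion rides on the harness's NOT wrapper, which inverts an exit status; a minimal sketch of the idiom, assuming a simplified form (the real helper in autotest_common.sh also runs the valid_exec_arg checks visible in the traces):

    NOT() {
        local es=0
        "$@" || es=$?     # run the wrapped command, keep its exit status
        (( es != 0 ))     # report success only if the command failed
    }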
00:05:17.621 [2024-11-19 12:57:20.685076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:17.621 [2024-11-19 12:57:20.773787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.621 [2024-11-19 12:57:20.776994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.621 [2024-11-19 12:57:20.776995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.217 [2024-11-19 12:57:21.454029] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2661159 has claimed it. 
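Editor's note: the JSON-RPC exchange behind that claim failure is dumped next. Issued by hand against the second target's socket, the same call would be (rpc.py path as invoked elsewhere in this log):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk2.sock framework_enable_cpumask_locks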
00:05:18.217 request: 00:05:18.217 { 00:05:18.217 "method": "framework_enable_cpumask_locks", 00:05:18.217 "req_id": 1 00:05:18.217 } 00:05:18.217 Got JSON-RPC error response 00:05:18.217 response: 00:05:18.217 { 00:05:18.217 "code": -32603, 00:05:18.217 "message": "Failed to claim CPU core: 2" 00:05:18.217 } 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2661159 /var/tmp/spdk.sock 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2661159 ']' 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.217 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.476 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.476 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.476 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2661172 /var/tmp/spdk2.sock 00:05:18.476 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2661172 ']' 00:05:18.476 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.476 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.476 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
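Editor's note: the (( es > 128 )) test in the error handling above matters because, in POSIX shells, an exit status above 128 conventionally means the process died on a signal (status minus 128), which would mark a crash rather than the clean refusal this test expects. A small sketch of that distinction (some_cmd is a placeholder, not a tool from this log):

    some_cmd; es=$?                              # hypothetical command under test
    if (( es > 128 )); then
        echo "died on signal $(( es - 128 ))"    # e.g. 137 -> SIGKILL (9)
    else
        echo "ordinary failure, exit status $es"
    fi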
00:05:18.476 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.476 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.735 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.735 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.735 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:18.735 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:18.735 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:18.735 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:18.735 00:05:18.735 real 0m1.716s 00:05:18.735 user 0m0.818s 00:05:18.735 sys 0m0.145s 00:05:18.735 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.735 12:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.735 ************************************ 00:05:18.735 END TEST locking_overlapped_coremask_via_rpc 00:05:18.735 ************************************ 00:05:18.735 12:57:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:18.735 12:57:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2661159 ]] 00:05:18.735 12:57:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2661159 00:05:18.735 12:57:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2661159 ']' 00:05:18.735 12:57:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2661159 00:05:18.735 12:57:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:18.735 12:57:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.735 12:57:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2661159 00:05:18.735 12:57:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.735 12:57:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.735 12:57:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2661159' 00:05:18.735 killing process with pid 2661159 00:05:18.735 12:57:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2661159 00:05:18.735 12:57:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2661159 00:05:18.994 12:57:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2661172 ]] 00:05:18.994 12:57:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2661172 00:05:18.994 12:57:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2661172 ']' 00:05:18.994 12:57:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2661172 00:05:18.994 12:57:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:18.994 12:57:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:18.994 12:57:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2661172 00:05:18.994 12:57:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:18.994 12:57:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:18.994 12:57:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2661172' 00:05:18.994 killing process with pid 2661172 00:05:18.994 12:57:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2661172 00:05:18.994 12:57:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2661172 00:05:19.254 12:57:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:19.254 12:57:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:19.254 12:57:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2661159 ]] 00:05:19.254 12:57:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2661159 00:05:19.254 12:57:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2661159 ']' 00:05:19.254 12:57:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2661159 00:05:19.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2661159) - No such process 00:05:19.254 12:57:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2661159 is not found' 00:05:19.254 Process with pid 2661159 is not found 00:05:19.254 12:57:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2661172 ]] 00:05:19.254 12:57:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2661172 00:05:19.254 12:57:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2661172 ']' 00:05:19.254 12:57:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2661172 00:05:19.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2661172) - No such process 00:05:19.254 12:57:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2661172 is not found' 00:05:19.254 Process with pid 2661172 is not found 00:05:19.254 12:57:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:19.254 00:05:19.254 real 0m14.384s 00:05:19.254 user 0m24.773s 00:05:19.254 sys 0m4.987s 00:05:19.254 12:57:22 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.254 12:57:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.254 ************************************ 00:05:19.254 END TEST cpu_locks 00:05:19.254 ************************************ 00:05:19.514 00:05:19.514 real 0m39.557s 00:05:19.514 user 1m15.600s 00:05:19.514 sys 0m8.512s 00:05:19.514 12:57:22 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.514 12:57:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.514 ************************************ 00:05:19.514 END TEST event 00:05:19.514 ************************************ 00:05:19.514 12:57:22 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:19.514 12:57:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.514 12:57:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.514 12:57:22 -- common/autotest_common.sh@10 -- # set +x 00:05:19.514 ************************************ 00:05:19.514 START TEST thread 00:05:19.514 ************************************ 00:05:19.514 12:57:22 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:19.514 * Looking for test storage... 00:05:19.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:19.514 12:57:22 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.514 12:57:22 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.514 12:57:22 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.514 12:57:22 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.514 12:57:22 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.514 12:57:22 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.514 12:57:22 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.514 12:57:22 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.514 12:57:22 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.514 12:57:22 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.514 12:57:22 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.514 12:57:22 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.773 12:57:22 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.773 12:57:22 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.773 12:57:22 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.773 12:57:22 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:19.773 12:57:22 thread -- scripts/common.sh@345 -- # : 1 00:05:19.773 12:57:22 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.773 12:57:22 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.773 12:57:22 thread -- scripts/common.sh@365 -- # decimal 1 00:05:19.773 12:57:22 thread -- scripts/common.sh@353 -- # local d=1 00:05:19.773 12:57:22 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.773 12:57:22 thread -- scripts/common.sh@355 -- # echo 1 00:05:19.773 12:57:22 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.773 12:57:22 thread -- scripts/common.sh@366 -- # decimal 2 00:05:19.773 12:57:22 thread -- scripts/common.sh@353 -- # local d=2 00:05:19.773 12:57:22 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.773 12:57:22 thread -- scripts/common.sh@355 -- # echo 2 00:05:19.773 12:57:22 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.773 12:57:22 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.773 12:57:22 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.773 12:57:22 thread -- scripts/common.sh@368 -- # return 0 00:05:19.773 12:57:22 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.773 12:57:22 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.773 --rc genhtml_branch_coverage=1 00:05:19.773 --rc genhtml_function_coverage=1 00:05:19.773 --rc genhtml_legend=1 00:05:19.773 --rc geninfo_all_blocks=1 00:05:19.773 --rc geninfo_unexecuted_blocks=1 00:05:19.773 00:05:19.773 ' 00:05:19.773 12:57:22 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.773 --rc genhtml_branch_coverage=1 00:05:19.773 --rc genhtml_function_coverage=1 00:05:19.773 --rc genhtml_legend=1 00:05:19.773 --rc geninfo_all_blocks=1 00:05:19.773 --rc geninfo_unexecuted_blocks=1 00:05:19.773 
00:05:19.773 ' 00:05:19.773 12:57:22 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.773 --rc genhtml_branch_coverage=1 00:05:19.773 --rc genhtml_function_coverage=1 00:05:19.773 --rc genhtml_legend=1 00:05:19.773 --rc geninfo_all_blocks=1 00:05:19.773 --rc geninfo_unexecuted_blocks=1 00:05:19.773 00:05:19.773 ' 00:05:19.773 12:57:22 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.773 --rc genhtml_branch_coverage=1 00:05:19.773 --rc genhtml_function_coverage=1 00:05:19.774 --rc genhtml_legend=1 00:05:19.774 --rc geninfo_all_blocks=1 00:05:19.774 --rc geninfo_unexecuted_blocks=1 00:05:19.774 00:05:19.774 ' 00:05:19.774 12:57:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:19.774 12:57:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:19.774 12:57:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.774 12:57:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.774 ************************************ 00:05:19.774 START TEST thread_poller_perf 00:05:19.774 ************************************ 00:05:19.774 12:57:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:19.774 [2024-11-19 12:57:22.955440] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:19.774 [2024-11-19 12:57:22.955510] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2661734 ] 00:05:19.774 [2024-11-19 12:57:23.030660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.774 [2024-11-19 12:57:23.070793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.774 Running 1000 pollers for 1 seconds with 1 microseconds period. 
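Editor's note: the results table that follows prints raw busy cycles, the total poll count, and the TSC rate; poller_cost is the quotient of the first two, converted to nanoseconds through tsc_hz. A worked check against the figures printed below, in bash integer math:

    busy=2306118110; runs=406000; hz=2300000000   # values from the table below
    cost=$(( busy / runs ))                       # 5680 cycles per poll
    nsec=$(( cost * 1000000000 / hz ))            # 2469 nsec at 2.3 GHz
    echo "$cost cyc, $nsec nsec"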
00:05:21.152 [2024-11-19T11:57:24.529Z] ====================================== 00:05:21.152 [2024-11-19T11:57:24.529Z] busy:2306118110 (cyc) 00:05:21.152 [2024-11-19T11:57:24.529Z] total_run_count: 406000 00:05:21.152 [2024-11-19T11:57:24.529Z] tsc_hz: 2300000000 (cyc) 00:05:21.152 [2024-11-19T11:57:24.529Z] ====================================== 00:05:21.152 [2024-11-19T11:57:24.529Z] poller_cost: 5680 (cyc), 2469 (nsec) 00:05:21.152 00:05:21.152 real 0m1.179s 00:05:21.152 user 0m1.099s 00:05:21.152 sys 0m0.076s 00:05:21.152 12:57:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.152 12:57:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.152 ************************************ 00:05:21.152 END TEST thread_poller_perf 00:05:21.152 ************************************ 00:05:21.152 12:57:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:21.152 12:57:24 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:21.152 12:57:24 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.152 12:57:24 thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.152 ************************************ 00:05:21.152 START TEST thread_poller_perf 00:05:21.152 ************************************ 00:05:21.152 12:57:24 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:21.152 [2024-11-19 12:57:24.204023] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:21.152 [2024-11-19 12:57:24.204085] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2661983 ] 00:05:21.152 [2024-11-19 12:57:24.283766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.152 [2024-11-19 12:57:24.324835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.152 Running 1000 pollers for 1 seconds with 0 microseconds period. 
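Editor's note: this second run passes -l 0, a 0-microsecond period; my reading (an interpretation, not stated in the log) is that the pollers are then dispatched on every reactor iteration instead of through the timer list, so the per-poll cost in the table below collapses from 5680 to 426 cycles:

    echo "timed vs busy poll cost: $(( 5680 / 426 ))x"   # ~13x, from the two tables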
00:05:22.113 [2024-11-19T11:57:25.490Z] ====================================== 00:05:22.113 [2024-11-19T11:57:25.490Z] busy:2301647086 (cyc) 00:05:22.113 [2024-11-19T11:57:25.490Z] total_run_count: 5391000 00:05:22.113 [2024-11-19T11:57:25.490Z] tsc_hz: 2300000000 (cyc) 00:05:22.113 [2024-11-19T11:57:25.490Z] ====================================== 00:05:22.113 [2024-11-19T11:57:25.490Z] poller_cost: 426 (cyc), 185 (nsec) 00:05:22.113 00:05:22.113 real 0m1.181s 00:05:22.113 user 0m1.102s 00:05:22.113 sys 0m0.075s 00:05:22.113 12:57:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.113 12:57:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.113 ************************************ 00:05:22.113 END TEST thread_poller_perf 00:05:22.113 ************************************ 00:05:22.113 12:57:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:22.113 00:05:22.113 real 0m2.666s 00:05:22.113 user 0m2.358s 00:05:22.113 sys 0m0.322s 00:05:22.113 12:57:25 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.113 12:57:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.113 ************************************ 00:05:22.113 END TEST thread 00:05:22.113 ************************************ 00:05:22.113 12:57:25 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:22.113 12:57:25 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:22.113 12:57:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.113 12:57:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.113 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:22.113 ************************************ 00:05:22.113 START TEST app_cmdline 00:05:22.113 ************************************ 00:05:22.114 12:57:25 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:22.374 * Looking for test storage... 
00:05:22.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:22.374 12:57:25 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.374 12:57:25 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.374 12:57:25 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.374 12:57:25 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.374 12:57:25 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:22.374 12:57:25 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.374 12:57:25 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:22.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.374 --rc genhtml_branch_coverage=1 00:05:22.374 --rc genhtml_function_coverage=1 00:05:22.374 --rc genhtml_legend=1 00:05:22.374 --rc geninfo_all_blocks=1 00:05:22.374 --rc geninfo_unexecuted_blocks=1 00:05:22.374 00:05:22.374 ' 00:05:22.374 12:57:25 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:22.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.374 --rc genhtml_branch_coverage=1 00:05:22.374 --rc genhtml_function_coverage=1 00:05:22.374 --rc genhtml_legend=1 00:05:22.374 --rc geninfo_all_blocks=1 00:05:22.374 --rc geninfo_unexecuted_blocks=1 
00:05:22.374 00:05:22.374 ' 00:05:22.374 12:57:25 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:22.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.374 --rc genhtml_branch_coverage=1 00:05:22.374 --rc genhtml_function_coverage=1 00:05:22.374 --rc genhtml_legend=1 00:05:22.374 --rc geninfo_all_blocks=1 00:05:22.374 --rc geninfo_unexecuted_blocks=1 00:05:22.374 00:05:22.374 ' 00:05:22.374 12:57:25 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:22.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.374 --rc genhtml_branch_coverage=1 00:05:22.374 --rc genhtml_function_coverage=1 00:05:22.374 --rc genhtml_legend=1 00:05:22.374 --rc geninfo_all_blocks=1 00:05:22.374 --rc geninfo_unexecuted_blocks=1 00:05:22.374 00:05:22.374 ' 00:05:22.374 12:57:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:22.374 12:57:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2662290 00:05:22.374 12:57:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2662290 00:05:22.374 12:57:25 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:22.374 12:57:25 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2662290 ']' 00:05:22.374 12:57:25 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.374 12:57:25 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.374 12:57:25 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.374 12:57:25 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.374 12:57:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:22.374 [2024-11-19 12:57:25.702225] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
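Editor's note: this spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served. The traces that follow call the allowed spdk_get_version, then assert that anything outside the allowlist fails with JSON-RPC error -32601 (Method not found). Reproduced by hand:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC spdk_get_version            # allowed; returns the version JSON shown below
    $RPC env_dpdk_get_mem_stats      # outside the allowlist; expected -32601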
00:05:22.374 [2024-11-19 12:57:25.702273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662290 ] 00:05:22.633 [2024-11-19 12:57:25.776314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.633 [2024-11-19 12:57:25.816406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.892 12:57:26 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.892 12:57:26 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:22.892 12:57:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:22.892 { 00:05:22.892 "version": "SPDK v25.01-pre git sha1 dcc2ca8f3", 00:05:22.892 "fields": { 00:05:22.892 "major": 25, 00:05:22.892 "minor": 1, 00:05:22.892 "patch": 0, 00:05:22.892 "suffix": "-pre", 00:05:22.892 "commit": "dcc2ca8f3" 00:05:22.892 } 00:05:22.892 } 00:05:22.892 12:57:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:22.892 12:57:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:22.892 12:57:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:22.892 12:57:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:22.892 12:57:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:22.892 12:57:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:22.892 12:57:26 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.892 12:57:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:22.892 12:57:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:22.892 12:57:26 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.892 12:57:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:22.892 12:57:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:22.892 12:57:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:22.892 12:57:26 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:22.892 12:57:26 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:22.892 12:57:26 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:22.892 12:57:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.892 12:57:26 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:22.892 12:57:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.892 12:57:26 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:22.892 12:57:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.892 12:57:26 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:22.892 12:57:26 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:22.892 12:57:26 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:23.151 request: 00:05:23.151 { 00:05:23.151 "method": "env_dpdk_get_mem_stats", 00:05:23.151 "req_id": 1 00:05:23.151 } 00:05:23.151 Got JSON-RPC error response 00:05:23.151 response: 00:05:23.151 { 00:05:23.151 "code": -32601, 00:05:23.151 "message": "Method not found" 00:05:23.151 } 00:05:23.151 12:57:26 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:23.151 12:57:26 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:23.151 12:57:26 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:23.151 12:57:26 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:23.151 12:57:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2662290 00:05:23.151 12:57:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2662290 ']' 00:05:23.151 12:57:26 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2662290 00:05:23.151 12:57:26 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:23.151 12:57:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.151 12:57:26 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2662290 00:05:23.151 12:57:26 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.151 12:57:26 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.151 12:57:26 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2662290' 00:05:23.151 killing process with pid 2662290 00:05:23.151 12:57:26 app_cmdline -- common/autotest_common.sh@973 -- # kill 2662290 00:05:23.151 12:57:26 app_cmdline -- common/autotest_common.sh@978 -- # wait 2662290 00:05:23.719 00:05:23.719 real 0m1.350s 00:05:23.719 user 0m1.567s 00:05:23.719 sys 0m0.461s 00:05:23.719 12:57:26 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.719 12:57:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:23.719 ************************************ 00:05:23.719 END TEST app_cmdline 00:05:23.719 ************************************ 00:05:23.719 12:57:26 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:23.719 12:57:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.719 12:57:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.719 12:57:26 -- common/autotest_common.sh@10 -- # set +x 00:05:23.719 ************************************ 00:05:23.719 START TEST version 00:05:23.719 ************************************ 00:05:23.719 12:57:26 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:23.719 * Looking for test storage... 
00:05:23.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:23.719 12:57:26 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.719 12:57:26 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.719 12:57:26 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.719 12:57:27 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.719 12:57:27 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.719 12:57:27 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.719 12:57:27 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.719 12:57:27 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.719 12:57:27 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.719 12:57:27 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.719 12:57:27 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.719 12:57:27 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.719 12:57:27 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.719 12:57:27 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.719 12:57:27 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.719 12:57:27 version -- scripts/common.sh@344 -- # case "$op" in 00:05:23.719 12:57:27 version -- scripts/common.sh@345 -- # : 1 00:05:23.719 12:57:27 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.719 12:57:27 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.719 12:57:27 version -- scripts/common.sh@365 -- # decimal 1 00:05:23.719 12:57:27 version -- scripts/common.sh@353 -- # local d=1 00:05:23.719 12:57:27 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.719 12:57:27 version -- scripts/common.sh@355 -- # echo 1 00:05:23.719 12:57:27 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.719 12:57:27 version -- scripts/common.sh@366 -- # decimal 2 00:05:23.719 12:57:27 version -- scripts/common.sh@353 -- # local d=2 00:05:23.719 12:57:27 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.719 12:57:27 version -- scripts/common.sh@355 -- # echo 2 00:05:23.719 12:57:27 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.719 12:57:27 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.719 12:57:27 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.719 12:57:27 version -- scripts/common.sh@368 -- # return 0 00:05:23.719 12:57:27 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.719 12:57:27 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.719 --rc genhtml_branch_coverage=1 00:05:23.719 --rc genhtml_function_coverage=1 00:05:23.719 --rc genhtml_legend=1 00:05:23.719 --rc geninfo_all_blocks=1 00:05:23.719 --rc geninfo_unexecuted_blocks=1 00:05:23.719 00:05:23.719 ' 00:05:23.719 12:57:27 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.719 --rc genhtml_branch_coverage=1 00:05:23.719 --rc genhtml_function_coverage=1 00:05:23.719 --rc genhtml_legend=1 00:05:23.719 --rc geninfo_all_blocks=1 00:05:23.719 --rc geninfo_unexecuted_blocks=1 00:05:23.719 00:05:23.719 ' 00:05:23.719 12:57:27 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.719 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.719 --rc genhtml_branch_coverage=1 00:05:23.719 --rc genhtml_function_coverage=1 00:05:23.719 --rc genhtml_legend=1 00:05:23.719 --rc geninfo_all_blocks=1 00:05:23.719 --rc geninfo_unexecuted_blocks=1 00:05:23.719 00:05:23.719 ' 00:05:23.719 12:57:27 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.719 --rc genhtml_branch_coverage=1 00:05:23.719 --rc genhtml_function_coverage=1 00:05:23.719 --rc genhtml_legend=1 00:05:23.719 --rc geninfo_all_blocks=1 00:05:23.720 --rc geninfo_unexecuted_blocks=1 00:05:23.720 00:05:23.720 ' 00:05:23.720 12:57:27 version -- app/version.sh@17 -- # get_header_version major 00:05:23.720 12:57:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.720 12:57:27 version -- app/version.sh@14 -- # cut -f2 00:05:23.720 12:57:27 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.720 12:57:27 version -- app/version.sh@17 -- # major=25 00:05:23.720 12:57:27 version -- app/version.sh@18 -- # get_header_version minor 00:05:23.720 12:57:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.720 12:57:27 version -- app/version.sh@14 -- # cut -f2 00:05:23.720 12:57:27 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.720 12:57:27 version -- app/version.sh@18 -- # minor=1 00:05:23.720 12:57:27 version -- app/version.sh@19 -- # get_header_version patch 00:05:23.720 12:57:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.720 12:57:27 version -- app/version.sh@14 -- # cut -f2 00:05:23.720 12:57:27 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.720 12:57:27 version -- app/version.sh@19 -- # patch=0 00:05:23.720 12:57:27 version -- app/version.sh@20 -- # get_header_version suffix 00:05:23.720 12:57:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.720 12:57:27 version -- app/version.sh@14 -- # cut -f2 00:05:23.720 12:57:27 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.979 12:57:27 version -- app/version.sh@20 -- # suffix=-pre 00:05:23.979 12:57:27 version -- app/version.sh@22 -- # version=25.1 00:05:23.979 12:57:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:23.979 12:57:27 version -- app/version.sh@28 -- # version=25.1rc0 00:05:23.979 12:57:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:23.979 12:57:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:23.979 12:57:27 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:23.979 12:57:27 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:23.979 00:05:23.979 real 0m0.246s 00:05:23.979 user 0m0.145s 00:05:23.979 sys 0m0.145s 00:05:23.979 12:57:27 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.979 
12:57:27 version -- common/autotest_common.sh@10 -- # set +x 00:05:23.979 ************************************ 00:05:23.979 END TEST version 00:05:23.979 ************************************ 00:05:23.979 12:57:27 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:23.979 12:57:27 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:23.979 12:57:27 -- spdk/autotest.sh@194 -- # uname -s 00:05:23.979 12:57:27 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:23.979 12:57:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:23.979 12:57:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:23.979 12:57:27 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:23.979 12:57:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:23.979 12:57:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:23.980 12:57:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.980 12:57:27 -- common/autotest_common.sh@10 -- # set +x 00:05:23.980 12:57:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:23.980 12:57:27 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:23.980 12:57:27 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:23.980 12:57:27 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:23.980 12:57:27 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:23.980 12:57:27 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:23.980 12:57:27 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:23.980 12:57:27 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:23.980 12:57:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.980 12:57:27 -- common/autotest_common.sh@10 -- # set +x 00:05:23.980 ************************************ 00:05:23.980 START TEST nvmf_tcp 00:05:23.980 ************************************ 00:05:23.980 12:57:27 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:23.980 * Looking for test storage... 
00:05:23.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:23.980 12:57:27 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.980 12:57:27 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.980 12:57:27 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.239 12:57:27 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:24.239 12:57:27 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.240 12:57:27 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.240 12:57:27 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.240 12:57:27 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:24.240 12:57:27 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.240 12:57:27 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.240 --rc genhtml_branch_coverage=1 00:05:24.240 --rc genhtml_function_coverage=1 00:05:24.240 --rc genhtml_legend=1 00:05:24.240 --rc geninfo_all_blocks=1 00:05:24.240 --rc geninfo_unexecuted_blocks=1 00:05:24.240 00:05:24.240 ' 00:05:24.240 12:57:27 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.240 --rc genhtml_branch_coverage=1 00:05:24.240 --rc genhtml_function_coverage=1 00:05:24.240 --rc genhtml_legend=1 00:05:24.240 --rc geninfo_all_blocks=1 00:05:24.240 --rc geninfo_unexecuted_blocks=1 00:05:24.240 00:05:24.240 ' 00:05:24.240 12:57:27 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:24.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.240 --rc genhtml_branch_coverage=1 00:05:24.240 --rc genhtml_function_coverage=1 00:05:24.240 --rc genhtml_legend=1 00:05:24.240 --rc geninfo_all_blocks=1 00:05:24.240 --rc geninfo_unexecuted_blocks=1 00:05:24.240 00:05:24.240 ' 00:05:24.240 12:57:27 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.240 --rc genhtml_branch_coverage=1 00:05:24.240 --rc genhtml_function_coverage=1 00:05:24.240 --rc genhtml_legend=1 00:05:24.240 --rc geninfo_all_blocks=1 00:05:24.240 --rc geninfo_unexecuted_blocks=1 00:05:24.240 00:05:24.240 ' 00:05:24.240 12:57:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:24.240 12:57:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:24.240 12:57:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:24.240 12:57:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:24.240 12:57:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.240 12:57:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.240 ************************************ 00:05:24.240 START TEST nvmf_target_core 00:05:24.240 ************************************ 00:05:24.240 12:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:24.240 * Looking for test storage... 00:05:24.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:24.240 12:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.240 12:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.240 12:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.500 --rc genhtml_branch_coverage=1 00:05:24.500 --rc genhtml_function_coverage=1 00:05:24.500 --rc genhtml_legend=1 00:05:24.500 --rc geninfo_all_blocks=1 00:05:24.500 --rc geninfo_unexecuted_blocks=1 00:05:24.500 00:05:24.500 ' 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.500 --rc genhtml_branch_coverage=1 00:05:24.500 --rc genhtml_function_coverage=1 00:05:24.500 --rc genhtml_legend=1 00:05:24.500 --rc geninfo_all_blocks=1 00:05:24.500 --rc geninfo_unexecuted_blocks=1 00:05:24.500 00:05:24.500 ' 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.500 --rc genhtml_branch_coverage=1 00:05:24.500 --rc genhtml_function_coverage=1 00:05:24.500 --rc genhtml_legend=1 00:05:24.500 --rc geninfo_all_blocks=1 00:05:24.500 --rc geninfo_unexecuted_blocks=1 00:05:24.500 00:05:24.500 ' 00:05:24.500 12:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.500 --rc genhtml_branch_coverage=1 00:05:24.500 --rc genhtml_function_coverage=1 00:05:24.500 --rc genhtml_legend=1 00:05:24.500 --rc geninfo_all_blocks=1 00:05:24.500 --rc geninfo_unexecuted_blocks=1 00:05:24.500 00:05:24.501 ' 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:24.501 
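The scripts/common.sh trace repeated above reduces to a single dotted-version test, lt 1.15 2: the extra coverage flags (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) are enabled only when the installed lcov predates version 2. A minimal sketch of that comparison, assuming the helper behaves as traced (version_lt is an illustrative name, not SPDK's):

    version_lt() {                                    # stands in for the traced lt/cmp_versions pair
        local IFS=.
        local -a a=($1) b=($2)                        # split both versions on dots
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0 # first differing component decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                      # equal is not "less than"
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi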
************************************ 00:05:24.501 START TEST nvmf_abort 00:05:24.501 ************************************ 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:24.501 * Looking for test storage... 00:05:24.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.501 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:24.761 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:24.761 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.761 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:24.761 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.761 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:24.761 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:24.761 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.761 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:24.761 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.761 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.762 --rc genhtml_branch_coverage=1 00:05:24.762 --rc genhtml_function_coverage=1 00:05:24.762 --rc genhtml_legend=1 00:05:24.762 --rc geninfo_all_blocks=1 00:05:24.762 --rc geninfo_unexecuted_blocks=1 00:05:24.762 00:05:24.762 ' 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.762 --rc genhtml_branch_coverage=1 00:05:24.762 --rc genhtml_function_coverage=1 00:05:24.762 --rc genhtml_legend=1 00:05:24.762 --rc geninfo_all_blocks=1 00:05:24.762 --rc geninfo_unexecuted_blocks=1 00:05:24.762 00:05:24.762 ' 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.762 --rc genhtml_branch_coverage=1 00:05:24.762 --rc genhtml_function_coverage=1 00:05:24.762 --rc genhtml_legend=1 00:05:24.762 --rc geninfo_all_blocks=1 00:05:24.762 --rc geninfo_unexecuted_blocks=1 00:05:24.762 00:05:24.762 ' 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.762 --rc genhtml_branch_coverage=1 00:05:24.762 --rc genhtml_function_coverage=1 00:05:24.762 --rc genhtml_legend=1 00:05:24.762 --rc geninfo_all_blocks=1 00:05:24.762 --rc geninfo_unexecuted_blocks=1 00:05:24.762 00:05:24.762 ' 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
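Before nvmftestinit runs, test/nvmf/common.sh pins the listener ports and host identity seen in the abort.sh trace above. A minimal sketch of those defaults; the NVME_HOSTID derivation is an assumption inferred from the logged values:

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:... in the trace
    NVME_HOSTID=${NVME_HOSTNQN##*:}             # assumption: host ID reuses the NQN's UUID suffix
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) # shm id 0 plus the full tracepoint mask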
00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:24.762 12:57:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:31.333 12:57:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:31.333 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:31.333 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:31.333 12:57:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:31.333 Found net devices under 0000:86:00.0: cvl_0_0 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:31.333 Found net devices under 0000:86:00.1: cvl_0_1 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:31.333 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:31.334 12:57:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:31.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:31.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:05:31.334 00:05:31.334 --- 10.0.0.2 ping statistics --- 00:05:31.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:31.334 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:31.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:31.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:05:31.334 00:05:31.334 --- 10.0.0.1 ping statistics --- 00:05:31.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:31.334 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2665969 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2665969 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2665969 ']' 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.334 12:57:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.334 [2024-11-19 12:57:34.035008] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:31.334 [2024-11-19 12:57:34.035052] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:31.334 [2024-11-19 12:57:34.115056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.334 [2024-11-19 12:57:34.156678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:31.334 [2024-11-19 12:57:34.156717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:31.334 [2024-11-19 12:57:34.156723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:31.334 [2024-11-19 12:57:34.156729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:31.334 [2024-11-19 12:57:34.156733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:31.334 [2024-11-19 12:57:34.158235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.334 [2024-11-19 12:57:34.158342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.334 [2024-11-19 12:57:34.158343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.593 [2024-11-19 12:57:34.920150] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.593 Malloc0 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.593 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.593 Delay0 
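The nvmftestinit block above turns the two e810 ports into a point-to-point TCP path: the target port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace at 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace at 10.0.0.1, an iptables rule accepts port 4420, and a ping in each direction verifies the link before nvmf_tgt starts. A condensed replay of that plumbing, with names and addresses copied from the log and error handling omitted:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1 # start from clean addresses
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator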
00:05:31.594 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.594 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:31.594 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.594 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.852 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.852 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:31.852 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.852 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.852 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.852 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:31.852 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.852 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.852 [2024-11-19 12:57:34.991534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:31.852 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.852 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:31.852 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.852 12:57:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.852 12:57:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.852 12:57:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:31.852 [2024-11-19 12:57:35.170036] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:34.385 Initializing NVMe Controllers 00:05:34.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:34.385 controller IO queue size 128 less than required 00:05:34.385 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:34.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:34.385 Initialization complete. Launching workers. 
00:05:34.385 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 36631 00:05:34.385 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36696, failed to submit 62 00:05:34.385 success 36635, unsuccessful 61, failed 0 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:34.385 rmmod nvme_tcp 00:05:34.385 rmmod nvme_fabrics 00:05:34.385 rmmod nvme_keyring 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2665969 ']' 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2665969 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2665969 ']' 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2665969 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2665969 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2665969' 00:05:34.385 killing process with pid 2665969 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2665969 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2665969 00:05:34.385 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:34.386 12:57:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:34.386 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:34.386 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:34.386 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:34.386 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:34.386 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:34.386 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:34.386 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:34.386 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:34.386 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:34.386 12:57:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:36.291 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:36.291 00:05:36.291 real 0m11.906s 00:05:36.291 user 0m13.820s 00:05:36.291 sys 0m5.442s 00:05:36.291 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.291 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:36.291 ************************************ 00:05:36.291 END TEST nvmf_abort 00:05:36.291 ************************************ 00:05:36.291 12:57:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:36.291 12:57:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:36.291 12:57:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.291 12:57:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:36.551 ************************************ 00:05:36.551 START TEST nvmf_ns_hotplug_stress 00:05:36.551 ************************************ 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:36.551 * Looking for test storage... 
00:05:36.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.551 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:36.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.552 --rc genhtml_branch_coverage=1 00:05:36.552 --rc genhtml_function_coverage=1 00:05:36.552 --rc genhtml_legend=1 00:05:36.552 --rc geninfo_all_blocks=1 00:05:36.552 --rc geninfo_unexecuted_blocks=1 00:05:36.552 00:05:36.552 ' 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:36.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.552 --rc genhtml_branch_coverage=1 00:05:36.552 --rc genhtml_function_coverage=1 00:05:36.552 --rc genhtml_legend=1 00:05:36.552 --rc geninfo_all_blocks=1 00:05:36.552 --rc geninfo_unexecuted_blocks=1 00:05:36.552 00:05:36.552 ' 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:36.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.552 --rc genhtml_branch_coverage=1 00:05:36.552 --rc genhtml_function_coverage=1 00:05:36.552 --rc genhtml_legend=1 00:05:36.552 --rc geninfo_all_blocks=1 00:05:36.552 --rc geninfo_unexecuted_blocks=1 00:05:36.552 00:05:36.552 ' 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:36.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.552 --rc genhtml_branch_coverage=1 00:05:36.552 --rc genhtml_function_coverage=1 00:05:36.552 --rc genhtml_legend=1 00:05:36.552 --rc geninfo_all_blocks=1 00:05:36.552 --rc geninfo_unexecuted_blocks=1 00:05:36.552 00:05:36.552 ' 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:36.552 12:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:43.126 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:43.127 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:43.127 
12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:43.127 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:43.127 Found net devices under 0000:86:00.0: cvl_0_0 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:43.127 Found net devices under 0000:86:00.1: cvl_0_1 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:43.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:43.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:05:43.127 00:05:43.127 --- 10.0.0.2 ping statistics --- 00:05:43.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:43.127 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:43.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:43.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:05:43.127 00:05:43.127 --- 10.0.0.1 ping statistics --- 00:05:43.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:43.127 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2670005 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2670005 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
2670005 ']' 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.127 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.128 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.128 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.128 12:57:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:43.128 [2024-11-19 12:57:45.965908] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:43.128 [2024-11-19 12:57:45.965968] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:43.128 [2024-11-19 12:57:46.043609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:43.128 [2024-11-19 12:57:46.084194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:43.128 [2024-11-19 12:57:46.084233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:43.128 [2024-11-19 12:57:46.084240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:43.128 [2024-11-19 12:57:46.084246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:43.128 [2024-11-19 12:57:46.084253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
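The bring-up traced above (nvmf/common.sh@250 through @291, then the nvmf_tgt launch at @508) is what lets a single phy host act as both NVMe/TCP target and initiator: one port of the NIC pair moves into a private network namespace for the target while its peer stays in the root namespace for the initiator, so I/O crosses the physical link instead of short-circuiting through loopback. A condensed sketch of that plumbing, using only commands visible in the trace; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the values this particular run discovered, and $SPDK_ROOT is a placeholder for the workspace checkout path:

  # target port gets its own namespace; the peer port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP listener port on the initiator-facing interface, then check both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # the target application then runs entirely inside the namespace ($SPDK_ROOT is a placeholder)
  ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE

Because nvmf_tgt runs under ip netns exec, its listener on 10.0.0.2:4420 is reachable only through cvl_0_0, so the initiator-side perf run issued from the root namespace genuinely exercises the wire.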
00:05:43.128 [2024-11-19 12:57:46.085710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.128 [2024-11-19 12:57:46.086632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.128 [2024-11-19 12:57:46.086632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.128 12:57:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.128 12:57:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:43.128 12:57:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:43.128 12:57:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:43.128 12:57:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:43.128 12:57:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:43.128 12:57:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:43.128 12:57:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:43.128 [2024-11-19 12:57:46.399530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:43.128 12:57:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:43.387 12:57:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:43.645 [2024-11-19 12:57:46.776923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:43.645 12:57:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:43.645 12:57:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:43.904 Malloc0 00:05:43.904 12:57:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:44.164 Delay0 00:05:44.164 12:57:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.422 12:57:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:44.681 NULL1 00:05:44.681 12:57:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:44.681 12:57:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:44.681 12:57:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2670487 00:05:44.681 12:57:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:44.681 12:57:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.940 12:57:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.198 12:57:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:45.198 12:57:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:45.456 true 00:05:45.456 12:57:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:45.456 12:57:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.715 12:57:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.715 12:57:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:45.715 12:57:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:45.973 true 00:05:45.973 12:57:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:45.973 12:57:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.348 Read completed with error (sct=0, sc=11) 00:05:47.348 12:57:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.348 12:57:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:47.348 12:57:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:47.348 true 00:05:47.348 12:57:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:47.348 12:57:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.607 12:57:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.865 12:57:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:47.866 12:57:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:48.124 true 00:05:48.124 12:57:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:48.124 12:57:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.383 12:57:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.383 12:57:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:48.383 12:57:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:48.641 true 00:05:48.641 12:57:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:48.641 12:57:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.900 12:57:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.158 12:57:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:49.158 12:57:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:49.158 true 00:05:49.158 12:57:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:49.158 12:57:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.550 12:57:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.550 12:57:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:50.550 12:57:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:50.550 true 00:05:50.550 12:57:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:50.550 12:57:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.808 12:57:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.067 12:57:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:51.067 12:57:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:51.326 true 00:05:51.326 12:57:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:51.326 12:57:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.261 12:57:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.520 12:57:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:52.520 12:57:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:52.778 true 00:05:52.778 12:57:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:52.778 12:57:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.712 12:57:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.712 12:57:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:53.712 12:57:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:53.970 true 00:05:53.970 
12:57:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:53.970 12:57:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.228 12:57:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.228 12:57:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:54.228 12:57:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:54.487 true 00:05:54.487 12:57:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:54.487 12:57:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.421 12:57:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.679 12:57:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:55.679 12:57:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:55.937 true 00:05:55.937 12:57:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:55.937 12:57:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.195 12:57:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.453 12:57:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:56.453 12:57:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:56.453 true 00:05:56.453 12:57:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:56.453 12:57:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.832 12:58:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.832 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.832 12:58:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:57.832 12:58:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:58.090 true 00:05:58.090 12:58:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:58.090 12:58:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.025 12:58:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.025 12:58:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:59.025 12:58:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:59.025 true 00:05:59.284 12:58:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:59.284 12:58:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.284 12:58:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.542 12:58:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:59.542 12:58:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:59.801 true 00:05:59.801 12:58:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:05:59.801 12:58:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.736 12:58:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.995 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.995 12:58:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:00.995 12:58:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:01.253 true 00:06:01.253 12:58:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:06:01.253 12:58:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.188 12:58:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.188 12:58:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:02.188 12:58:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:02.447 true 00:06:02.447 12:58:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:06:02.447 12:58:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.706 12:58:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.706 12:58:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:02.706 12:58:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:02.965 true 00:06:02.965 12:58:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:06:02.965 12:58:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.343 12:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.343 12:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:04.343 12:58:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:04.343 true 00:06:04.602 12:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:06:04.602 12:58:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.168 12:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.427 12:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:05.427 12:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:05.685 true 00:06:05.685 12:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:06:05.685 12:58:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.952 12:58:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.952 12:58:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:05.952 12:58:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:06.212 true 00:06:06.212 12:58:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:06:06.212 12:58:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.687 12:58:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.687 12:58:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:07.687 12:58:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:08.022 true 00:06:08.022 12:58:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:06:08.022 12:58:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.022 12:58:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.281 12:58:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:08.281 12:58:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:08.281 true 00:06:08.540 12:58:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:06:08.540 12:58:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.476 12:58:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.735 12:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:09.735 12:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:09.993 true 00:06:09.993 12:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:06:09.993 12:58:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.929 12:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.929 12:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:10.929 12:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:11.188 true 00:06:11.188 12:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:06:11.188 12:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.447 12:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.705 12:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:11.705 12:58:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:11.705 true 00:06:11.705 12:58:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:06:11.705 12:58:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.081 12:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.081 12:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:13.081 12:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:13.339 true 00:06:13.339 12:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:06:13.339 12:58:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.274 12:58:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.274 12:58:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:14.274 12:58:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:14.533 true 00:06:14.533 12:58:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487 00:06:14.533 12:58:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.791 12:58:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.791 12:58:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:14.791 12:58:18 
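The xtrace entries above (ns_hotplug_stress.sh lines 44-50) repeat one iteration of the first stress phase: while the background I/O load generator (PID 2670487) is still alive, namespace 1 is hot-removed from cnode1, re-added on the Delay0 bdev, and the NULL1 bdev is grown by one step; each "true" is the output of the bdev_null_resize RPC. A minimal sketch of that loop as reconstructed from the trace; rpc_py and PERF_PID are placeholder names, not necessarily the script's actual variables, and null_size is treated as a simple counter here:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
null_size=1020

while kill -0 "$PERF_PID"; do                                   # loop while the workload runs (line 44)
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove NSID 1 (line 45)
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # hot-add it back on Delay0 (line 46)
    ((++null_size))                                             # grow the null bdev each pass (line 49)
    $rpc_py bdev_null_resize NULL1 "$null_size"                 # prints "true" on success (line 50)
done

Once the workload exits, kill -0 fails with "No such process" and the loop ends, as seen just below.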
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:06:15.050 true
00:06:15.050 12:58:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487
00:06:15.050 12:58:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:15.985 12:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:16.244 Initializing NVMe Controllers
00:06:16.244 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:16.244 Controller IO queue size 128, less than required.
00:06:16.244 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:16.244 Controller IO queue size 128, less than required.
00:06:16.244 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:16.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:16.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:16.244 Initialization complete. Launching workers.
00:06:16.244 ========================================================
00:06:16.244 Latency(us)
00:06:16.244 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:16.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1470.47       0.72   55656.13    3043.82 1019144.63
00:06:16.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   15506.22       7.57    8233.85    1626.88  457565.07
00:06:16.244 ========================================================
00:06:16.244 Total                                                                    :   16976.69       8.29   12341.41    1626.88 1019144.63
00:06:16.244
00:06:16.244 12:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:06:16.244 12:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:06:16.502 true
00:06:16.502 12:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2670487
00:06:16.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2670487) - No such process
00:06:16.502 12:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2670487
00:06:16.502 12:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:16.761 12:58:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:17.019 12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:17.019 12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
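In the latency summary above, the Total row's Average is the IOPS-weighted mean of the two per-namespace averages. A quick check (values copied from the table; the column widths in this sketch are illustrative, and the last digit drifts slightly because the printed inputs are already rounded):

awk 'BEGIN {
    iops1 = 1470.47;  avg1 = 55656.13   # NSID 1 row
    iops2 = 15506.22; avg2 = 8233.85    # NSID 2 row
    t = iops1 + iops2
    # prints: Total IOPS 16976.69, weighted average ~12341.4 us
    printf "Total IOPS %.2f, weighted average %.1f us\n", t, (iops1*avg1 + iops2*avg2)/t
}'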
12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:17.019 12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.019 12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:17.019 null0 00:06:17.019 12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.019 12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.019 12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:17.278 null1 00:06:17.278 12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.278 12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.278 12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:17.536 null2 00:06:17.536 12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.536 12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.536 12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:17.536 null3 00:06:17.795 12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.795 12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.795 12:58:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:17.795 null4 00:06:17.795 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.795 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.795 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:18.053 null5 00:06:18.053 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:18.053 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:18.053 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:18.312 null6 00:06:18.312 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:18.312 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:06:18.312 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:18.572 null7 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
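The trace just above (script lines 58-64) sets up the parallel phase: eight 100 MiB null bdevs with 4096-byte blocks are created, then eight add_remove workers are forked with their PIDs collected for the wait that follows. A sketch reconstructed from the trace, reusing the rpc_py placeholder from the earlier sketch (add_remove itself is sketched after the wait entry below):

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    "$rpc_py" bdev_null_create "null$i" 100 4096   # name, size in MiB, block size (line 60)
done
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &               # worker i churns NSID i+1 (line 63)
    pids+=("$!")                                   # remember each worker's PID (line 64)
done
wait "${pids[@]}"                                  # line 66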
00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
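Each forked worker runs add_remove (script lines 14-18): ten cycles of adding its own namespace to cnode1 and removing it again, so eight workers hammer the subsystem's attach/detach path concurrently; the "wait 2676230 ... 2676250" entry a few lines below blocks until all eight finish, and their interleaved xtrace output makes up the rest of this section. Reconstructed from the trace ($rpc_py as before):

add_remove() {
    local nsid=$1 bdev=$2                          # e.g. add_remove 1 null0 (line 14)
    for ((i = 0; i < 10; i++)); do                 # ten add/remove cycles (line 16)
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # line 17
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # line 18
    done
}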
00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.572 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2676230 2676232 2676235 2676239 2676242 2676245 2676248 2676250 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:18.573 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.832 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:18.832 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:18.832 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:18.832 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:18.832 12:58:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.832 12:58:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.832 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.092 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.092 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.092 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.092 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.092 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.092 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.092 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.092 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.350 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:19.609 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.609 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.609 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.609 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.609 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.609 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.609 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.609 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.868 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.868 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.868 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.868 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.868 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.868 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:19.868 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.868 12:58:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.868 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:19.868 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.868 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.868 12:58:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.868 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.127 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.128 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.128 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:20.386 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.386 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:20.386 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:20.386 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:20.386 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:20.386 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:20.386 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:20.386 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:20.645 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.645 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.645 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.645 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.645 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.645 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.645 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.645 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.645 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:20.645 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.646 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.646 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.646 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.646 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.646 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.646 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.646 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.646 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:20.646 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.646 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.646 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.646 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.646 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.646 12:58:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.905 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:20.905 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:20.905 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.905 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:20.905 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:20.905 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:20.905 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:20.905 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:20.905 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.905 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.905 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.905 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.905 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.906 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.906 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.906 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.906 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.906 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.906 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.906 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.165 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.424 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.425 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.425 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.684 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.684 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.684 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.684 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.684 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.684 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:21.684 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.684 12:58:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.942 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.942 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.942 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.942 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.942 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.942 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
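The interleaved @16/@17/@18 xtrace lines around this point are all one construct: the namespace hotplug loop in target/ns_hotplug_stress.sh. A sketch consistent with the trace follows; the worker-per-namespace structure is an inference from the interleaved loop counters, not the verbatim script. Each worker repeatedly attaches its namespace ID to a null bdev and detaches it again:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {                                   # hypothetical: one worker per namespace ID
        local n=$1
        for (( i = 0; i < 10; ++i )); do             # the "(( ++i )) / (( i < 10 ))" lines at @16
            "$rpc" nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$(( n - 1 ))"   # @17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"                      # @18
        done
    }
    for n in $(seq 1 8); do add_remove "$n" & done   # eight concurrent workers would explain the interleaving
    wait

Note the off-by-one pairing visible throughout the trace: namespace ID n is always backed by bdev null(n-1), which the sketch preserves.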
00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.943 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
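Each of these rpc.py calls is a short-lived JSON-RPC client round-trip to the target's UNIX socket, /var/tmp/spdk.sock (the socket is named explicitly when the next app starts later in this log). For illustration only, one add_ns call written as raw JSON-RPC; the params layout follows SPDK's documented nvmf_subsystem_add_ns method, and piping through nc assumes a netcat build with -U (UNIX socket) support:

    printf '%s' '{"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns",
      "params":{"nqn":"nqn.2016-06.io.spdk:cnode1",
                "namespace":{"bdev_name":"null4","nsid":5}}}' \
        | nc -U /var/tmp/spdk.sock   # same effect as: rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4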
00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.202 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:22.461 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.461 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.461 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.461 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.461 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.461 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.461 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.461 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.720 12:58:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:22.720 12:58:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:22.720 rmmod nvme_tcp 00:06:22.720 rmmod nvme_fabrics 00:06:22.720 rmmod nvme_keyring 00:06:22.720 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:22.720 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:22.720 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:22.720 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2670005 ']' 00:06:22.720 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2670005 00:06:22.720 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2670005 ']' 00:06:22.720 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2670005 00:06:22.720 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:22.720 12:58:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.720 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2670005 00:06:22.720 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:22.720 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:22.720 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2670005' 00:06:22.720 killing process with pid 2670005 00:06:22.720 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2670005 00:06:22.720 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2670005 00:06:22.980 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:22.980 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:22.980 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:22.980 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:22.980 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:22.980 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:22.980 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:22.980 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:22.980 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:22.980 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.980 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:22.980 12:58:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:25.516 00:06:25.516 real 0m48.645s 00:06:25.516 user 3m18.516s 00:06:25.516 sys 0m15.509s 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:25.516 ************************************ 00:06:25.516 END TEST nvmf_ns_hotplug_stress 00:06:25.516 ************************************ 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:25.516 
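The block above is the shared nvmftestfini teardown: unload the host-side NVMe modules, kill the nvmf_tgt app, strip the SPDK-tagged firewall rules, and drop the target network namespace. Reconstructed as a sketch from the traced commands (the netns deletion inside _remove_spdk_ns is an assumption, since the trace only shows the wrapper call):

    sync
    set +e
    for i in {1..20}; do modprobe -v -r nvme-tcp && break; done   # retried because the module may still be busy
    modprobe -v -r nvme-fabrics                                   # pulls nvme_keyring out with it, per the rmmod lines
    set -e
    kill 2670005 && wait 2670005                                  # the nvmf_tgt reactor process for this test
    iptables-save | grep -v SPDK_NVMF | iptables-restore          # keep every rule except SPDK's own
    ip netns delete cvl_0_0_ns_spdk                               # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1

With the environment back to baseline, the harness reports the test's timing (48.6 s of wall time above) and moves straight on to the next run_test invocation below.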
************************************ 00:06:25.516 START TEST nvmf_delete_subsystem 00:06:25.516 ************************************ 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:25.516 * Looking for test storage... 00:06:25.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:25.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.516 --rc genhtml_branch_coverage=1 00:06:25.516 --rc genhtml_function_coverage=1 00:06:25.516 --rc genhtml_legend=1 00:06:25.516 --rc geninfo_all_blocks=1 00:06:25.516 --rc geninfo_unexecuted_blocks=1 00:06:25.516 00:06:25.516 ' 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:25.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.516 --rc genhtml_branch_coverage=1 00:06:25.516 --rc genhtml_function_coverage=1 00:06:25.516 --rc genhtml_legend=1 00:06:25.516 --rc geninfo_all_blocks=1 00:06:25.516 --rc geninfo_unexecuted_blocks=1 00:06:25.516 00:06:25.516 ' 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:25.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.516 --rc genhtml_branch_coverage=1 00:06:25.516 --rc genhtml_function_coverage=1 00:06:25.516 --rc genhtml_legend=1 00:06:25.516 --rc geninfo_all_blocks=1 00:06:25.516 --rc geninfo_unexecuted_blocks=1 00:06:25.516 00:06:25.516 ' 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:25.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.516 --rc genhtml_branch_coverage=1 00:06:25.516 --rc genhtml_function_coverage=1 00:06:25.516 --rc genhtml_legend=1 00:06:25.516 --rc geninfo_all_blocks=1 00:06:25.516 --rc geninfo_unexecuted_blocks=1 00:06:25.516 00:06:25.516 ' 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.516 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:25.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:25.517 12:58:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:32.085 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:32.086 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:32.086 
12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:32.086 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:32.086 Found net devices under 0000:86:00.0: cvl_0_0 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:32.086 Found net devices under 0000:86:00.1: cvl_0_1 
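At this point device discovery has matched both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b, bound to the ice driver) and resolved them to the kernel net devices cvl_0_0 and cvl_0_1. The same walk can be sketched directly against sysfs; the paths are standard, but the single device ID below stands in for the script's full E810/X722/Mellanox allowlists:

    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == "$intel" ]] || continue
        [[ $(cat "$pci/device") == 0x159b ]] || continue   # E810 port ("ice")
        for net in "$pci"/net/*; do                        # netdevs registered for this PCI function
            [[ -e "$net" ]] || continue
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done

One port is about to become the target interface (cvl_0_0, moved into its own network namespace) and the other the initiator interface (cvl_0_1), letting a single machine drive NVMe/TCP traffic across a physical NIC-to-NIC path.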
00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:32.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:32.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:06:32.086 00:06:32.086 --- 10.0.0.2 ping statistics --- 00:06:32.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.086 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:32.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:32.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:06:32.086 00:06:32.086 --- 10.0.0.1 ping statistics --- 00:06:32.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.086 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2680729 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2680729 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2680729 ']' 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.086 12:58:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.086 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.086 [2024-11-19 12:58:34.627236] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:32.086 [2024-11-19 12:58:34.627280] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.086 [2024-11-19 12:58:34.706630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.086 [2024-11-19 12:58:34.748374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.087 [2024-11-19 12:58:34.748413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.087 [2024-11-19 12:58:34.748420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:32.087 [2024-11-19 12:58:34.748426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:32.087 [2024-11-19 12:58:34.748431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:32.087 [2024-11-19 12:58:34.749638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.087 [2024-11-19 12:58:34.749639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.087 [2024-11-19 12:58:34.885934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:32.087 12:58:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.087 [2024-11-19 12:58:34.906129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.087 NULL1 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.087 Delay0 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2680750 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:32.087 12:58:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:32.087 [2024-11-19 12:58:35.017873] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
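Setup for the delete-under-I/O case is now complete, and the traced rpc_cmd lines pin down the whole flow; the delete itself is issued on the next trace line. Condensed, with the long tool paths shortened and only the comments added:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512                       # 1000 MiB backing bdev, 512 B blocks
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                   -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &      # queue depth 128 keeps I/O outstanding
    sleep 2
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1      # yank the subsystem mid-run

The delay bdev holds every I/O for roughly a second (its -r/-t/-w/-n latencies are given in microseconds), so when the subsystem is deleted two seconds in, a full queue of commands is guaranteed to be in flight. The error storm that follows is the expected outcome, not a failure: sct=0, sc=8 is the NVMe generic status "Command Aborted due to SQ Deletion", and each "starting I/O failed: -6" is perf getting -ENXIO when it tries to resubmit against the destroyed queues.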
00:06:33.988 12:58:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:33.988 12:58:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.988 12:58:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:33.988 [many repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions elided]
00:06:33.988 [2024-11-19 12:58:37.267489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c64a0 is same with the state(6) to be set
00:06:33.989 [further "Read/Write completed with error (sct=0, sc=8)" completions elided]
00:06:33.989 [2024-11-19 12:58:37.268051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f95e800d4d0 is same with the state(6) to be set
00:06:34.924 [2024-11-19 12:58:38.237986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c79a0 is same with the state(6) to be set
00:06:34.924 [further "Read/Write completed with error (sct=0, sc=8)" completions elided]
00:06:34.924 [2024-11-19 12:58:38.270004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f95e800d020 is same with the state(6) to be set
00:06:34.924 [further "Read/Write completed with error (sct=0, sc=8)" completions elided]
00:06:34.924 [2024-11-19 12:58:38.270496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f95e8000c40 is same with the state(6) to be set
00:06:34.924 [further "Read/Write completed with error (sct=0, sc=8)" completions elided]
00:06:34.924 [2024-11-19 12:58:38.270668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c6680 is same with the state(6) to be set
00:06:34.924 [further "Read/Write completed with error (sct=0, sc=8)" completions elided]
00:06:34.924 [2024-11-19 12:58:38.271189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f95e800d800 is same with the state(6) to be set
00:06:34.924 Initializing NVMe Controllers 00:06:34.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:34.924 Controller IO queue size 128, less than required. 
00:06:34.924 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:34.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:34.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:34.924 Initialization complete. Launching workers. 00:06:34.924 ======================================================== 00:06:34.924 Latency(us) 00:06:34.924 Device Information : IOPS MiB/s Average min max 00:06:34.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.46 0.08 853978.97 379.55 2002306.56 00:06:34.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.49 0.08 1113796.05 628.71 2002644.92 00:06:34.924 ======================================================== 00:06:34.924 Total : 331.95 0.16 981942.77 379.55 2002644.92 00:06:34.924 00:06:34.924 [2024-11-19 12:58:38.271850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c79a0 (9): Bad file descriptor 00:06:34.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:34.924 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.924 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:34.924 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2680750 00:06:34.924 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2680750 00:06:35.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2680750) - No such process 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2680750 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2680750 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2680750 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:35.490 [2024-11-19 12:58:38.799872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2681446 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2681446 00:06:35.490 12:58:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:35.747 [2024-11-19 12:58:38.892013] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
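The flood of (sct=0, sc=8) completions above is the intended outcome of the first pass: deleting the subsystem while perf still has 128 commands outstanding aborts every in-flight I/O, and SCT 0 / SC 0x08 corresponds to the NVMe generic status "Command Aborted due to SQ Deletion". The second pass just launched repeats the setup but lets the 3-second perf run finish on its own; the kill -0 polling in the trace below is how the script waits for the process to exit. A minimal sketch of that wait pattern, with illustrative variable names rather than the script's exact source:

  # Sketch of the poll-until-perf-exits loop visible in the trace
  # ($perf_pid and the iteration cap mirror delete_subsystem.sh; names illustrative)
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do      # kill -0 only tests for existence
      (( delay++ > 20 )) && { echo "perf did not exit in time" >&2; exit 1; }
      sleep 0.5
  done
  wait "$perf_pid" || true                       # reap perf once it is gone

Because Delay0 adds ~1,000,000 us to every I/O, the latency table printed below reports averages just above that figure (1002428.16 us and 1003433.44 us) with minima near 1000141 us; queue depth 128 at ~1 s per I/O likewise explains the flat 128.00 IOPS per core.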
00:06:36.005 12:58:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:36.005 12:58:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2681446 00:06:36.005 12:58:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:36.571 12:58:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:36.571 12:58:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2681446 00:06:36.571 12:58:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:37.137 12:58:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:37.137 12:58:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2681446 00:06:37.137 12:58:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:37.704 12:58:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:37.705 12:58:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2681446 00:06:37.705 12:58:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:37.963 12:58:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:37.963 12:58:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2681446 00:06:37.963 12:58:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:38.530 12:58:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:38.530 12:58:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2681446 00:06:38.530 12:58:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:38.789 Initializing NVMe Controllers 00:06:38.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:38.789 Controller IO queue size 128, less than required. 00:06:38.789 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:38.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:38.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:38.789 Initialization complete. Launching workers. 
00:06:38.789 ======================================================== 00:06:38.789 Latency(us) 00:06:38.789 Device Information : IOPS MiB/s Average min max 00:06:38.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002428.16 1000141.46 1041457.72 00:06:38.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003433.44 1000165.31 1010052.07 00:06:38.789 ======================================================== 00:06:38.789 Total : 256.00 0.12 1002930.80 1000141.46 1041457.72 00:06:38.789 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2681446 00:06:39.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2681446) - No such process 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2681446 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:39.047 rmmod nvme_tcp 00:06:39.047 rmmod nvme_fabrics 00:06:39.047 rmmod nvme_keyring 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2680729 ']' 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2680729 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2680729 ']' 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2680729 00:06:39.047 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2680729 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2680729' 00:06:39.305 killing process with pid 2680729 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2680729 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2680729 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:39.305 12:58:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.841 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:41.841 00:06:41.841 real 0m16.299s 00:06:41.841 user 0m29.612s 00:06:41.841 sys 0m5.503s 00:06:41.841 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.841 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:41.841 ************************************ 00:06:41.841 END TEST nvmf_delete_subsystem 00:06:41.841 ************************************ 00:06:41.841 12:58:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:41.841 12:58:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:41.841 12:58:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.841 12:58:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:41.841 ************************************ 00:06:41.841 START TEST nvmf_host_management 00:06:41.841 ************************************ 00:06:41.841 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:41.841 * Looking for test storage... 
00:06:41.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.842 --rc genhtml_branch_coverage=1 00:06:41.842 --rc genhtml_function_coverage=1 00:06:41.842 --rc genhtml_legend=1 00:06:41.842 --rc geninfo_all_blocks=1 00:06:41.842 --rc geninfo_unexecuted_blocks=1 00:06:41.842 00:06:41.842 ' 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.842 --rc genhtml_branch_coverage=1 00:06:41.842 --rc genhtml_function_coverage=1 00:06:41.842 --rc genhtml_legend=1 00:06:41.842 --rc geninfo_all_blocks=1 00:06:41.842 --rc geninfo_unexecuted_blocks=1 00:06:41.842 00:06:41.842 ' 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.842 --rc genhtml_branch_coverage=1 00:06:41.842 --rc genhtml_function_coverage=1 00:06:41.842 --rc genhtml_legend=1 00:06:41.842 --rc geninfo_all_blocks=1 00:06:41.842 --rc geninfo_unexecuted_blocks=1 00:06:41.842 00:06:41.842 ' 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.842 --rc genhtml_branch_coverage=1 00:06:41.842 --rc genhtml_function_coverage=1 00:06:41.842 --rc genhtml_legend=1 00:06:41.842 --rc geninfo_all_blocks=1 00:06:41.842 --rc geninfo_unexecuted_blocks=1 00:06:41.842 00:06:41.842 ' 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:41.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:41.842 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:41.843 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:41.843 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:41.843 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:41.843 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:41.843 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:41.843 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:41.843 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:41.843 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:41.843 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:41.843 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.843 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:41.843 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.843 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:41.843 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:41.843 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:41.843 12:58:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:48.410 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:48.410 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:48.411 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:48.411 Found net devices under 0000:86:00.0: cvl_0_0 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.411 12:58:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:48.411 Found net devices under 0000:86:00.1: cvl_0_1 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:48.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:48.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:06:48.411 00:06:48.411 --- 10.0.0.2 ping statistics --- 00:06:48.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.411 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:48.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:48.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:06:48.411 00:06:48.411 --- 10.0.0.1 ping statistics --- 00:06:48.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.411 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:48.411 12:58:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:48.411 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:48.411 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:48.411 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:48.411 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:48.411 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.411 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.411 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2685652 00:06:48.412 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2685652 00:06:48.412 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:48.412 12:58:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2685652 ']' 00:06:48.412 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.412 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.412 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.412 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.412 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.412 [2024-11-19 12:58:51.072423] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:48.412 [2024-11-19 12:58:51.072476] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.412 [2024-11-19 12:58:51.150628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.412 [2024-11-19 12:58:51.193072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.412 [2024-11-19 12:58:51.193111] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.412 [2024-11-19 12:58:51.193118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.412 [2024-11-19 12:58:51.193125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.412 [2024-11-19 12:58:51.193131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
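Everything traced up to this point is nvmf_tcp_init plus nvmfappstart: the framework pairs the two E810 ports back-to-back through a network namespace, moving cvl_0_0 into cvl_0_0_ns_spdk as the target side at 10.0.0.2 while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, opens TCP port 4420 in iptables, proves reachability with one ping in each direction, and finally launches nvmf_tgt inside the namespace. A minimal sketch of the same topology, using only commands that appear in the trace above (run as root, paths relative to the SPDK checkout):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E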
00:06:48.412 [2024-11-19 12:58:51.194725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.412 [2024-11-19 12:58:51.194841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.412 [2024-11-19 12:58:51.194929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.412 [2024-11-19 12:58:51.194930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:48.670 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.670 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:48.670 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:48.670 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:48.670 12:58:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.670 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.670 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:48.671 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.671 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.671 [2024-11-19 12:58:52.034043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.671 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.671 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:48.671 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.671 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.671 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:48.929 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.930 Malloc0 00:06:48.930 [2024-11-19 12:58:52.106383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2685790 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2685790 /var/tmp/bdevperf.sock 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2685790 ']' 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:48.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:48.930 { 00:06:48.930 "params": { 00:06:48.930 "name": "Nvme$subsystem", 00:06:48.930 "trtype": "$TEST_TRANSPORT", 00:06:48.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:48.930 "adrfam": "ipv4", 00:06:48.930 "trsvcid": "$NVMF_PORT", 00:06:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:48.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:48.930 "hdgst": ${hdgst:-false}, 00:06:48.930 "ddgst": ${ddgst:-false} 00:06:48.930 }, 00:06:48.930 "method": "bdev_nvme_attach_controller" 00:06:48.930 } 00:06:48.930 EOF 00:06:48.930 )") 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:48.930 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:48.930 "params": { 00:06:48.930 "name": "Nvme0", 00:06:48.930 "trtype": "tcp", 00:06:48.930 "traddr": "10.0.0.2", 00:06:48.930 "adrfam": "ipv4", 00:06:48.930 "trsvcid": "4420", 00:06:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:48.930 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:48.930 "hdgst": false, 00:06:48.930 "ddgst": false 00:06:48.930 }, 00:06:48.930 "method": "bdev_nvme_attach_controller" 00:06:48.930 }' 00:06:48.930 [2024-11-19 12:58:52.202654] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
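The JSON blob just printed is the work of gen_nvmf_target_json: one bdev_nvme_attach_controller stanza per subsystem (here a single Nvme0 pointing at 10.0.0.2:4420), pretty-printed through jq. bdevperf receives it as --json /dev/fd/63, which is how bash process substitution shows up in a trace; a hedged sketch of the equivalent invocation (gen_nvmf_target_json is a test-framework helper from nvmf/common.sh, not a stand-alone tool):

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10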
00:06:48.930 [2024-11-19 12:58:52.202705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2685790 ] 00:06:48.930 [2024-11-19 12:58:52.280866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.189 [2024-11-19 12:58:52.323248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.189 Running I/O for 10 seconds... 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=105 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 105 -ge 100 ']' 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:49.450 12:58:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:49.450 [2024-11-19 12:58:52.624087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624252] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 [2024-11-19 12:58:52.624356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21de200 is same with the state(6) to be set 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.450 [2024-11-19 12:58:52.629838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.450 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:49.451 [2024-11-19 12:58:52.629870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.629881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.451 [2024-11-19 12:58:52.629888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.629895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.451 [2024-11-19 12:58:52.629902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.629909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.451 [2024-11-19 12:58:52.629916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.629923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c79500 is same with the state(6) to be set 00:06:49.451 [2024-11-19 12:58:52.636604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.451 [2024-11-19 12:58:52.636658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
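The wall of ABORTED - SQ DELETION completions running through this stretch is the induced failure, not a malfunction: host_management.sh@84 above revoked the host's access with nvmf_subsystem_remove_host, the target tore down the I/O qpair, and every in-flight READ/WRITE on qid:1 now completes with an abort status while bdevperf begins its controller reset. The trigger, as a sketch (rpc_cmd in the trace is a wrapper around scripts/rpc.py):

    ./scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0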
00:06:49.451 [2024-11-19 12:58:52.636869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.636990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.636996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 12:58:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:49.451 [2024-11-19 12:58:52.637005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.637012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.637021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.637029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.637038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.637045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.637053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.637060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.637068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.637076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.637084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.637091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.637100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.637106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.637116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.637123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.637131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.637138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.637147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.451 [2024-11-19 12:58:52.637155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.451 [2024-11-19 12:58:52.637163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.637604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.452 [2024-11-19 12:58:52.637611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.452 [2024-11-19 12:58:52.638575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:49.452 task offset: 24320 on job bdev=Nvme0n1 fails 00:06:49.452 00:06:49.452 Latency(us) 00:06:49.452 [2024-11-19T11:58:52.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:49.452 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:49.452 Job: Nvme0n1 ended in about 0.11 seconds with error 00:06:49.452 Verification LBA range: start 0x0 length 0x400 00:06:49.452 Nvme0n1 : 0.11 1688.80 105.55 
568.86 0.00 26149.70 1339.21 28265.96 00:06:49.452 [2024-11-19T11:58:52.829Z] =================================================================================================================== 00:06:49.452 [2024-11-19T11:58:52.829Z] Total : 1688.80 105.55 568.86 0.00 26149.70 1339.21 28265.96 00:06:49.452 [2024-11-19 12:58:52.640974] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:49.452 [2024-11-19 12:58:52.640993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c79500 (9): Bad file descriptor 00:06:49.452 [2024-11-19 12:58:52.649966] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:50.387 12:58:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2685790 00:06:50.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2685790) - No such process 00:06:50.387 12:58:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:50.387 12:58:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:50.387 12:58:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:50.387 12:58:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:50.387 12:58:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:50.387 12:58:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:50.387 12:58:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:50.387 12:58:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:50.387 { 00:06:50.387 "params": { 00:06:50.387 "name": "Nvme$subsystem", 00:06:50.387 "trtype": "$TEST_TRANSPORT", 00:06:50.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:50.387 "adrfam": "ipv4", 00:06:50.387 "trsvcid": "$NVMF_PORT", 00:06:50.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:50.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:50.387 "hdgst": ${hdgst:-false}, 00:06:50.387 "ddgst": ${ddgst:-false} 00:06:50.387 }, 00:06:50.387 "method": "bdev_nvme_attach_controller" 00:06:50.387 } 00:06:50.387 EOF 00:06:50.387 )") 00:06:50.387 12:58:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:50.387 12:58:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
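With the first bdevperf gone (the kill -9 at host_management.sh@91 reports "No such process" because it had already exited), the test now proves the path recovered: @100 reruns bdevperf against the same subsystem for one second (-t 1) with an identically generated JSON config. Progress can be confirmed the same way the earlier waitforio loop did, by polling the read counter over the bdevperf RPC socket; a sketch, again assuming rpc_cmd resolves to scripts/rpc.py:

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops'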
00:06:50.387 12:58:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:50.387 12:58:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:50.387 "params": { 00:06:50.387 "name": "Nvme0", 00:06:50.387 "trtype": "tcp", 00:06:50.387 "traddr": "10.0.0.2", 00:06:50.387 "adrfam": "ipv4", 00:06:50.387 "trsvcid": "4420", 00:06:50.387 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:50.387 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:50.387 "hdgst": false, 00:06:50.387 "ddgst": false 00:06:50.387 }, 00:06:50.387 "method": "bdev_nvme_attach_controller" 00:06:50.387 }' 00:06:50.388 [2024-11-19 12:58:53.692091] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:50.388 [2024-11-19 12:58:53.692152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2686134 ] 00:06:50.646 [2024-11-19 12:58:53.768155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.646 [2024-11-19 12:58:53.808254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.646 Running I/O for 1 seconds... 00:06:52.023 1984.00 IOPS, 124.00 MiB/s 00:06:52.023 Latency(us) 00:06:52.023 [2024-11-19T11:58:55.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:52.023 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:52.023 Verification LBA range: start 0x0 length 0x400 00:06:52.023 Nvme0n1 : 1.02 2004.22 125.26 0.00 0.00 31428.67 4986.43 27924.03 00:06:52.023 [2024-11-19T11:58:55.400Z] =================================================================================================================== 00:06:52.023 [2024-11-19T11:58:55.400Z] Total : 2004.22 125.26 0.00 0.00 31428.67 4986.43 27924.03 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:52.023 rmmod nvme_tcp 00:06:52.023 rmmod nvme_fabrics 00:06:52.023 rmmod nvme_keyring 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2685652 ']' 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2685652 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2685652 ']' 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2685652 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2685652 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2685652' 00:06:52.023 killing process with pid 2685652 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2685652 00:06:52.023 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2685652 00:06:52.282 [2024-11-19 12:58:55.472886] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:52.282 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:52.282 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:52.282 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:52.282 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:52.283 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:52.283 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:52.283 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:52.283 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:52.283 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:52.283 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.283 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.283 12:58:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.190 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:54.449 12:58:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:54.449 00:06:54.449 real 0m12.793s 00:06:54.449 user 0m21.000s 00:06:54.449 sys 0m5.612s 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:54.449 ************************************ 00:06:54.449 END TEST nvmf_host_management 00:06:54.449 ************************************ 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:54.449 ************************************ 00:06:54.449 START TEST nvmf_lvol 00:06:54.449 ************************************ 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:54.449 * Looking for test storage... 00:06:54.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.449 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:54.450 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:54.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.710 --rc genhtml_branch_coverage=1 00:06:54.710 --rc genhtml_function_coverage=1 00:06:54.710 --rc genhtml_legend=1 00:06:54.710 --rc geninfo_all_blocks=1 00:06:54.710 --rc geninfo_unexecuted_blocks=1 00:06:54.710 00:06:54.710 ' 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:54.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.710 --rc genhtml_branch_coverage=1 00:06:54.710 --rc genhtml_function_coverage=1 00:06:54.710 --rc genhtml_legend=1 00:06:54.710 --rc geninfo_all_blocks=1 00:06:54.710 --rc geninfo_unexecuted_blocks=1 00:06:54.710 00:06:54.710 ' 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:54.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.710 --rc genhtml_branch_coverage=1 00:06:54.710 --rc genhtml_function_coverage=1 00:06:54.710 --rc genhtml_legend=1 00:06:54.710 --rc geninfo_all_blocks=1 00:06:54.710 --rc geninfo_unexecuted_blocks=1 00:06:54.710 00:06:54.710 ' 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:54.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.710 --rc genhtml_branch_coverage=1 00:06:54.710 --rc genhtml_function_coverage=1 00:06:54.710 --rc genhtml_legend=1 00:06:54.710 --rc geninfo_all_blocks=1 00:06:54.710 --rc geninfo_unexecuted_blocks=1 00:06:54.710 00:06:54.710 ' 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
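
[The cmp_versions walk traced a few entries above (IFS=.-, read -ra, decimal, (( ver1[v] < ver2[v] ))) is how the harness decides whether the installed lcov predates 2.x before setting LCOV_OPTS. A condensed sketch of that comparison, reconstructed from the trace rather than copied from scripts/common.sh, and assuming purely numeric version fields (the real script routes each field through its decimal helper first):

    #!/usr/bin/env bash
    # lt A B: succeed when version A sorts strictly before version B.
    # Mirrors the traced algorithm: split on '.', '-', ':' and compare field by field.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo 'lcov older than 2.x'   # same outcome as the "lt 1.15 2" call in the trace

For "1.15" versus "2" the first fields already differ (1 < 2), so the loop returns success on its first pass, which is why the trace above stops comparing after ver1[v]=1 against ver2[v]=2.]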
00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:54.710 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:54.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:54.711 12:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:01.416 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:01.416 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:01.416 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:01.416 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:01.416 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:01.416 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:01.416 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:01.417 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:01.417 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:01.417 12:59:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:01.417 Found net devices under 0000:86:00.0: cvl_0_0 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:01.417 Found net devices under 0000:86:00.1: cvl_0_1 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:01.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:01.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:07:01.417 00:07:01.417 --- 10.0.0.2 ping statistics --- 00:07:01.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.417 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:01.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:01.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:07:01.417 00:07:01.417 --- 10.0.0.1 ping statistics --- 00:07:01.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.417 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:01.417 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:01.418 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:01.418 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2689970 00:07:01.418 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2689970 00:07:01.418 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:01.418 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2689970 ']' 00:07:01.418 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.418 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.418 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.418 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.418 12:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:01.418 [2024-11-19 12:59:03.920045] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:01.418 [2024-11-19 12:59:03.920098] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.418 [2024-11-19 12:59:04.003425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.418 [2024-11-19 12:59:04.046250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.418 [2024-11-19 12:59:04.046287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.418 [2024-11-19 12:59:04.046294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.418 [2024-11-19 12:59:04.046301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.418 [2024-11-19 12:59:04.046307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.418 [2024-11-19 12:59:04.047714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.418 [2024-11-19 12:59:04.047744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.418 [2024-11-19 12:59:04.047746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.418 12:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.418 12:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:01.418 12:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:01.418 12:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:01.418 12:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:01.418 12:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.418 12:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:01.418 [2024-11-19 12:59:04.344770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.418 12:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:01.418 12:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:01.418 12:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:01.677 12:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:01.677 12:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:01.677 12:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:01.937 12:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9e216314-e565-4fc2-86a8-59a07a05f3bf 00:07:01.937 12:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9e216314-e565-4fc2-86a8-59a07a05f3bf lvol 20 00:07:02.196 12:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f3407ea7-7db0-45f8-8b11-09ab48532e20 00:07:02.196 12:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:02.455 12:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f3407ea7-7db0-45f8-8b11-09ab48532e20 00:07:02.455 12:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:02.713 [2024-11-19 12:59:05.999330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.713 12:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:02.972 12:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2690426 00:07:02.972 12:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:02.972 12:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:03.910 12:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f3407ea7-7db0-45f8-8b11-09ab48532e20 MY_SNAPSHOT 00:07:04.173 12:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8cb8954f-1280-4240-976b-13a4e8f30d33 00:07:04.173 12:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f3407ea7-7db0-45f8-8b11-09ab48532e20 30 00:07:04.432 12:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8cb8954f-1280-4240-976b-13a4e8f30d33 MY_CLONE 00:07:04.691 12:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f04ba8b6-684f-4ae1-b214-4f024ce51024 00:07:04.691 12:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f04ba8b6-684f-4ae1-b214-4f024ce51024 00:07:05.258 12:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2690426 00:07:13.379 Initializing NVMe Controllers 00:07:13.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:13.379 Controller IO queue size 128, less than required. 00:07:13.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
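
[The sequence just traced is the whole lvol exercise in miniature: build a raid0 from two malloc bdevs, put an lvstore on it, carve out a 20 MiB lvol, export it over NVMe/TCP, then snapshot, grow, clone, and inflate it while spdk_nvme_perf drives random writes. Condensed into plain rpc.py calls, as a sketch assembled from this trace (the $rpc shorthand is an abbreviation introduced here, the 10.0.0.2/4420 endpoint is this run's, and each create call prints the name or UUID that the next step consumes):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                                   # -> Malloc0
    $rpc bdev_malloc_create 64 512                                   # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB lvol bdev
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # ... spdk_nvme_perf runs randwrite I/O against the subsystem while:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)              # freeze current data
    $rpc bdev_lvol_resize "$lvol" 30                                 # grow the live lvol to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)                   # writable clone of the snapshot
    $rpc bdev_lvol_inflate "$clone"                                  # allocate all clusters of the clone

Inflating allocates every cluster of MY_CLONE so it no longer depends on MY_SNAPSHOT, which is what lets the later teardown delete the original lvol and lvstore cleanly.]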
00:07:13.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:13.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:13.379 Initialization complete. Launching workers. 00:07:13.379 ======================================================== 00:07:13.379 Latency(us) 00:07:13.379 Device Information : IOPS MiB/s Average min max 00:07:13.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11971.80 46.76 10694.10 1571.81 49855.25 00:07:13.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12132.10 47.39 10554.89 3569.31 56260.01 00:07:13.379 ======================================================== 00:07:13.379 Total : 24103.90 94.16 10624.03 1571.81 56260.01 00:07:13.379 00:07:13.379 12:59:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:13.638 12:59:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f3407ea7-7db0-45f8-8b11-09ab48532e20 00:07:13.896 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9e216314-e565-4fc2-86a8-59a07a05f3bf 00:07:14.155 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:14.155 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:14.155 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:14.156 rmmod nvme_tcp 00:07:14.156 rmmod nvme_fabrics 00:07:14.156 rmmod nvme_keyring 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2689970 ']' 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2689970 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2689970 ']' 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2689970 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2689970 00:07:14.156 12:59:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2689970' 00:07:14.156 killing process with pid 2689970 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2689970 00:07:14.156 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2689970 00:07:14.415 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:14.415 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:14.415 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:14.415 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:14.415 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:14.415 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:14.415 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:14.415 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:14.415 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:14.415 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.415 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.415 12:59:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.323 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:16.323 00:07:16.323 real 0m22.046s 00:07:16.323 user 1m3.243s 00:07:16.323 sys 0m7.803s 00:07:16.323 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.323 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.323 ************************************ 00:07:16.323 END TEST nvmf_lvol 00:07:16.323 ************************************ 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:16.583 ************************************ 00:07:16.583 START TEST nvmf_lvs_grow 00:07:16.583 ************************************ 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:16.583 * Looking for test storage... 
00:07:16.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.583 --rc genhtml_branch_coverage=1 00:07:16.583 --rc genhtml_function_coverage=1 00:07:16.583 --rc genhtml_legend=1 00:07:16.583 --rc geninfo_all_blocks=1 00:07:16.583 --rc geninfo_unexecuted_blocks=1 00:07:16.583 00:07:16.583 ' 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.583 --rc genhtml_branch_coverage=1 00:07:16.583 --rc genhtml_function_coverage=1 00:07:16.583 --rc genhtml_legend=1 00:07:16.583 --rc geninfo_all_blocks=1 00:07:16.583 --rc geninfo_unexecuted_blocks=1 00:07:16.583 00:07:16.583 ' 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:16.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.583 --rc genhtml_branch_coverage=1 00:07:16.583 --rc genhtml_function_coverage=1 00:07:16.583 --rc genhtml_legend=1 00:07:16.583 --rc geninfo_all_blocks=1 00:07:16.583 --rc geninfo_unexecuted_blocks=1 00:07:16.583 00:07:16.583 ' 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.583 --rc genhtml_branch_coverage=1 00:07:16.583 --rc genhtml_function_coverage=1 00:07:16.583 --rc genhtml_legend=1 00:07:16.583 --rc geninfo_all_blocks=1 00:07:16.583 --rc geninfo_unexecuted_blocks=1 00:07:16.583 00:07:16.583 ' 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:16.583 12:59:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.583 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:16.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:16.843 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:16.844 12:59:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:23.415 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:23.415 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:23.415 12:59:25 
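The device-id tables above are what drive the NIC auto-detection just performed: e810 collects Intel (0x8086) ids 0x1592 and 0x159b, x722 collects 0x37d2, and the mlx list collects the Mellanox (0x15b3) ids; both 0x159b ports found here land in the e810 set and are bound to the ice driver. A rough standalone sketch of the same sysfs walk follows; the vendor:device filter and the device paths are the ones from this run, so treat it as illustrative rather than a harness command:

# List E810 ports (8086:159b) and the kernel netdevs behind them.
for pci in $(lspci -Dmmn -d 8086:159b | awk '{print $1}'); do
  for net in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$net" ] && echo "Found net device under $pci: ${net##*/}"
  done
done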
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:23.415 Found net devices under 0000:86:00.0: cvl_0_0 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.415 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:23.416 Found net devices under 0000:86:00.1: cvl_0_1 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:23.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:23.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:07:23.416 00:07:23.416 --- 10.0.0.2 ping statistics --- 00:07:23.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.416 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:23.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:07:23.416 00:07:23.416 --- 10.0.0.1 ping statistics --- 00:07:23.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.416 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2695855 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2695855 00:07:23.416 12:59:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2695855 ']' 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:23.416 [2024-11-19 12:59:26.052123] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:23.416 [2024-11-19 12:59:26.052168] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.416 [2024-11-19 12:59:26.133361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.416 [2024-11-19 12:59:26.174250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.416 [2024-11-19 12:59:26.174286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.416 [2024-11-19 12:59:26.174294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.416 [2024-11-19 12:59:26.174300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.416 [2024-11-19 12:59:26.174305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:23.416 [2024-11-19 12:59:26.174853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:23.416 [2024-11-19 12:59:26.474641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:23.416 ************************************ 00:07:23.416 START TEST lvs_grow_clean 00:07:23.416 ************************************ 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:23.416 12:59:26 
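What nvmftestinit did above: the dual-port NIC is split into a point-to-point TCP topology by moving port 0 into a private network namespace, so the target (10.0.0.2 inside the namespace, on cvl_0_0) and the initiator (10.0.0.1 on the host, on cvl_0_1) talk over the physical link between the two ports. A condensed replay of the trace, with repo-relative paths substituted for the full jenkins workspace paths:

ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move port 0 into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host
# start the target inside the namespace, then create the TCP transport
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192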
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:23.416 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:23.676 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3a44c036-bc37-4ef5-9f08-ccff81f0a1f9 00:07:23.676 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a44c036-bc37-4ef5-9f08-ccff81f0a1f9 00:07:23.676 12:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:23.935 12:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:23.935 12:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:23.935 12:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3a44c036-bc37-4ef5-9f08-ccff81f0a1f9 lvol 150 00:07:24.193 12:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=dba32384-3fcc-48ed-8396-b135a73a6b7a 00:07:24.193 12:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:24.193 12:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:24.193 [2024-11-19 12:59:27.532974] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:24.193 [2024-11-19 12:59:27.533024] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:24.193 true 00:07:24.194 12:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
3a44c036-bc37-4ef5-9f08-ccff81f0a1f9 00:07:24.194 12:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:24.452 12:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:24.452 12:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:24.711 12:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dba32384-3fcc-48ed-8396-b135a73a6b7a 00:07:24.969 12:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:24.969 [2024-11-19 12:59:28.275214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.969 12:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:25.228 12:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2696316 00:07:25.228 12:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:25.228 12:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:25.228 12:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2696316 /var/tmp/bdevperf.sock 00:07:25.228 12:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2696316 ']' 00:07:25.228 12:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:25.228 12:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.228 12:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:25.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:25.228 12:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.228 12:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:25.228 [2024-11-19 12:59:28.523924] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
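The export-and-attach sequence running here: the 150M lvol is published as namespace 1 of nqn.2016-06.io.spdk:cnode0 on the 10.0.0.2:4420 listener, and bdevperf is launched as a separate SPDK process (started suspended with -z) that connects back over TCP before the workload is kicked off. Condensed from the surrounding trace, with repo-relative paths; the lvol UUID is the one created in this run:

./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dba32384-3fcc-48ed-8396-b135a73a6b7a
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# initiator: 4 KiB random writes, queue depth 128, 10 s, core mask 0x2
./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests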
00:07:25.228 [2024-11-19 12:59:28.523979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2696316 ] 00:07:25.228 [2024-11-19 12:59:28.599801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.487 [2024-11-19 12:59:28.642200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.487 12:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.487 12:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:25.487 12:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:25.746 Nvme0n1 00:07:25.746 12:59:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:26.014 [ 00:07:26.014 { 00:07:26.014 "name": "Nvme0n1", 00:07:26.014 "aliases": [ 00:07:26.014 "dba32384-3fcc-48ed-8396-b135a73a6b7a" 00:07:26.014 ], 00:07:26.014 "product_name": "NVMe disk", 00:07:26.014 "block_size": 4096, 00:07:26.014 "num_blocks": 38912, 00:07:26.014 "uuid": "dba32384-3fcc-48ed-8396-b135a73a6b7a", 00:07:26.014 "numa_id": 1, 00:07:26.014 "assigned_rate_limits": { 00:07:26.014 "rw_ios_per_sec": 0, 00:07:26.014 "rw_mbytes_per_sec": 0, 00:07:26.014 "r_mbytes_per_sec": 0, 00:07:26.014 "w_mbytes_per_sec": 0 00:07:26.014 }, 00:07:26.014 "claimed": false, 00:07:26.014 "zoned": false, 00:07:26.014 "supported_io_types": { 00:07:26.014 "read": true, 00:07:26.014 "write": true, 00:07:26.014 "unmap": true, 00:07:26.014 "flush": true, 00:07:26.014 "reset": true, 00:07:26.014 "nvme_admin": true, 00:07:26.014 "nvme_io": true, 00:07:26.014 "nvme_io_md": false, 00:07:26.014 "write_zeroes": true, 00:07:26.014 "zcopy": false, 00:07:26.014 "get_zone_info": false, 00:07:26.014 "zone_management": false, 00:07:26.014 "zone_append": false, 00:07:26.014 "compare": true, 00:07:26.014 "compare_and_write": true, 00:07:26.014 "abort": true, 00:07:26.014 "seek_hole": false, 00:07:26.014 "seek_data": false, 00:07:26.014 "copy": true, 00:07:26.014 "nvme_iov_md": false 00:07:26.014 }, 00:07:26.014 "memory_domains": [ 00:07:26.014 { 00:07:26.014 "dma_device_id": "system", 00:07:26.014 "dma_device_type": 1 00:07:26.014 } 00:07:26.014 ], 00:07:26.014 "driver_specific": { 00:07:26.014 "nvme": [ 00:07:26.014 { 00:07:26.014 "trid": { 00:07:26.014 "trtype": "TCP", 00:07:26.014 "adrfam": "IPv4", 00:07:26.014 "traddr": "10.0.0.2", 00:07:26.014 "trsvcid": "4420", 00:07:26.014 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:26.014 }, 00:07:26.014 "ctrlr_data": { 00:07:26.014 "cntlid": 1, 00:07:26.014 "vendor_id": "0x8086", 00:07:26.014 "model_number": "SPDK bdev Controller", 00:07:26.014 "serial_number": "SPDK0", 00:07:26.014 "firmware_revision": "25.01", 00:07:26.014 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:26.014 "oacs": { 00:07:26.014 "security": 0, 00:07:26.014 "format": 0, 00:07:26.014 "firmware": 0, 00:07:26.014 "ns_manage": 0 00:07:26.014 }, 00:07:26.014 "multi_ctrlr": true, 00:07:26.014 
"ana_reporting": false 00:07:26.014 }, 00:07:26.014 "vs": { 00:07:26.014 "nvme_version": "1.3" 00:07:26.014 }, 00:07:26.014 "ns_data": { 00:07:26.014 "id": 1, 00:07:26.014 "can_share": true 00:07:26.014 } 00:07:26.014 } 00:07:26.014 ], 00:07:26.014 "mp_policy": "active_passive" 00:07:26.014 } 00:07:26.014 } 00:07:26.014 ] 00:07:26.014 12:59:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2696366 00:07:26.014 12:59:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:26.014 12:59:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:26.014 Running I/O for 10 seconds... 00:07:26.949 Latency(us) 00:07:26.949 [2024-11-19T11:59:30.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.949 Nvme0n1 : 1.00 22310.00 87.15 0.00 0.00 0.00 0.00 0.00 00:07:26.949 [2024-11-19T11:59:30.326Z] =================================================================================================================== 00:07:26.949 [2024-11-19T11:59:30.326Z] Total : 22310.00 87.15 0.00 0.00 0.00 0.00 0.00 00:07:26.949 00:07:27.888 12:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3a44c036-bc37-4ef5-9f08-ccff81f0a1f9 00:07:28.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.146 Nvme0n1 : 2.00 22493.50 87.87 0.00 0.00 0.00 0.00 0.00 00:07:28.146 [2024-11-19T11:59:31.523Z] =================================================================================================================== 00:07:28.146 [2024-11-19T11:59:31.524Z] Total : 22493.50 87.87 0.00 0.00 0.00 0.00 0.00 00:07:28.147 00:07:28.147 true 00:07:28.147 12:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a44c036-bc37-4ef5-9f08-ccff81f0a1f9 00:07:28.147 12:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:28.405 12:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:28.405 12:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:28.405 12:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2696366 00:07:28.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.974 Nvme0n1 : 3.00 22645.67 88.46 0.00 0.00 0.00 0.00 0.00 00:07:28.974 [2024-11-19T11:59:32.351Z] =================================================================================================================== 00:07:28.974 [2024-11-19T11:59:32.351Z] Total : 22645.67 88.46 0.00 0.00 0.00 0.00 0.00 00:07:28.974 00:07:30.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.352 Nvme0n1 : 4.00 22754.25 88.88 0.00 0.00 0.00 0.00 0.00 00:07:30.352 [2024-11-19T11:59:33.729Z] 
=================================================================================================================== 00:07:30.352 [2024-11-19T11:59:33.729Z] Total : 22754.25 88.88 0.00 0.00 0.00 0.00 0.00 00:07:30.352 00:07:31.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.290 Nvme0n1 : 5.00 22827.40 89.17 0.00 0.00 0.00 0.00 0.00 00:07:31.290 [2024-11-19T11:59:34.667Z] =================================================================================================================== 00:07:31.290 [2024-11-19T11:59:34.667Z] Total : 22827.40 89.17 0.00 0.00 0.00 0.00 0.00 00:07:31.290 00:07:32.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.227 Nvme0n1 : 6.00 22873.00 89.35 0.00 0.00 0.00 0.00 0.00 00:07:32.227 [2024-11-19T11:59:35.604Z] =================================================================================================================== 00:07:32.227 [2024-11-19T11:59:35.604Z] Total : 22873.00 89.35 0.00 0.00 0.00 0.00 0.00 00:07:32.227 00:07:33.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.163 Nvme0n1 : 7.00 22917.71 89.52 0.00 0.00 0.00 0.00 0.00 00:07:33.163 [2024-11-19T11:59:36.540Z] =================================================================================================================== 00:07:33.163 [2024-11-19T11:59:36.540Z] Total : 22917.71 89.52 0.00 0.00 0.00 0.00 0.00 00:07:33.163 00:07:34.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.100 Nvme0n1 : 8.00 22950.25 89.65 0.00 0.00 0.00 0.00 0.00 00:07:34.100 [2024-11-19T11:59:37.477Z] =================================================================================================================== 00:07:34.100 [2024-11-19T11:59:37.477Z] Total : 22950.25 89.65 0.00 0.00 0.00 0.00 0.00 00:07:34.100 00:07:35.035 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.035 Nvme0n1 : 9.00 22976.00 89.75 0.00 0.00 0.00 0.00 0.00 00:07:35.035 [2024-11-19T11:59:38.412Z] =================================================================================================================== 00:07:35.035 [2024-11-19T11:59:38.412Z] Total : 22976.00 89.75 0.00 0.00 0.00 0.00 0.00 00:07:35.035 00:07:35.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.971 Nvme0n1 : 10.00 23002.50 89.85 0.00 0.00 0.00 0.00 0.00 00:07:35.971 [2024-11-19T11:59:39.348Z] =================================================================================================================== 00:07:35.971 [2024-11-19T11:59:39.348Z] Total : 23002.50 89.85 0.00 0.00 0.00 0.00 0.00 00:07:35.971 00:07:35.971 00:07:35.971 Latency(us) 00:07:35.971 [2024-11-19T11:59:39.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.971 Nvme0n1 : 10.01 23002.97 89.86 0.00 0.00 5561.56 3191.32 10314.80 00:07:35.971 [2024-11-19T11:59:39.348Z] =================================================================================================================== 00:07:35.971 [2024-11-19T11:59:39.348Z] Total : 23002.97 89.86 0.00 0.00 5561.56 3191.32 10314.80 00:07:35.971 { 00:07:35.971 "results": [ 00:07:35.971 { 00:07:35.971 "job": "Nvme0n1", 00:07:35.971 "core_mask": "0x2", 00:07:35.971 "workload": "randwrite", 00:07:35.971 "status": "finished", 00:07:35.971 "queue_depth": 128, 00:07:35.971 "io_size": 4096, 00:07:35.971 
"runtime": 10.005362, 00:07:35.971 "iops": 23002.96580973282, 00:07:35.971 "mibps": 89.85533519426883, 00:07:35.971 "io_failed": 0, 00:07:35.971 "io_timeout": 0, 00:07:35.971 "avg_latency_us": 5561.555254687855, 00:07:35.971 "min_latency_us": 3191.318260869565, 00:07:35.971 "max_latency_us": 10314.79652173913 00:07:35.971 } 00:07:35.971 ], 00:07:35.971 "core_count": 1 00:07:35.971 } 00:07:36.229 12:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2696316 00:07:36.229 12:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2696316 ']' 00:07:36.229 12:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2696316 00:07:36.229 12:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:36.229 12:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.229 12:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2696316 00:07:36.229 12:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:36.229 12:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:36.229 12:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2696316' 00:07:36.229 killing process with pid 2696316 00:07:36.229 12:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2696316 00:07:36.229 Received shutdown signal, test time was about 10.000000 seconds 00:07:36.229 00:07:36.229 Latency(us) 00:07:36.229 [2024-11-19T11:59:39.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.229 [2024-11-19T11:59:39.606Z] =================================================================================================================== 00:07:36.230 [2024-11-19T11:59:39.607Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:36.230 12:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2696316 00:07:36.230 12:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:36.488 12:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:36.746 12:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a44c036-bc37-4ef5-9f08-ccff81f0a1f9 00:07:36.746 12:59:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:37.005 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:37.005 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:37.005 12:59:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:37.005 [2024-11-19 12:59:40.347653] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:37.005 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a44c036-bc37-4ef5-9f08-ccff81f0a1f9 00:07:37.005 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:37.005 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a44c036-bc37-4ef5-9f08-ccff81f0a1f9 00:07:37.005 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.264 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.264 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.264 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.264 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.264 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.264 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.264 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:37.264 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a44c036-bc37-4ef5-9f08-ccff81f0a1f9 00:07:37.264 request: 00:07:37.264 { 00:07:37.264 "uuid": "3a44c036-bc37-4ef5-9f08-ccff81f0a1f9", 00:07:37.264 "method": "bdev_lvol_get_lvstores", 00:07:37.264 "req_id": 1 00:07:37.264 } 00:07:37.264 Got JSON-RPC error response 00:07:37.264 response: 00:07:37.264 { 00:07:37.264 "code": -19, 00:07:37.264 "message": "No such device" 00:07:37.264 } 00:07:37.264 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:37.264 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.264 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:37.264 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.264 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:37.523 aio_bdev 00:07:37.523 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dba32384-3fcc-48ed-8396-b135a73a6b7a 00:07:37.523 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=dba32384-3fcc-48ed-8396-b135a73a6b7a 00:07:37.523 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.523 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:37.523 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.523 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.523 12:59:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:37.782 12:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dba32384-3fcc-48ed-8396-b135a73a6b7a -t 2000 00:07:38.041 [ 00:07:38.041 { 00:07:38.041 "name": "dba32384-3fcc-48ed-8396-b135a73a6b7a", 00:07:38.041 "aliases": [ 00:07:38.041 "lvs/lvol" 00:07:38.041 ], 00:07:38.041 "product_name": "Logical Volume", 00:07:38.041 "block_size": 4096, 00:07:38.041 "num_blocks": 38912, 00:07:38.041 "uuid": "dba32384-3fcc-48ed-8396-b135a73a6b7a", 00:07:38.041 "assigned_rate_limits": { 00:07:38.041 "rw_ios_per_sec": 0, 00:07:38.041 "rw_mbytes_per_sec": 0, 00:07:38.041 "r_mbytes_per_sec": 0, 00:07:38.041 "w_mbytes_per_sec": 0 00:07:38.041 }, 00:07:38.041 "claimed": false, 00:07:38.041 "zoned": false, 00:07:38.041 "supported_io_types": { 00:07:38.041 "read": true, 00:07:38.041 "write": true, 00:07:38.041 "unmap": true, 00:07:38.041 "flush": false, 00:07:38.041 "reset": true, 00:07:38.041 "nvme_admin": false, 00:07:38.041 "nvme_io": false, 00:07:38.041 "nvme_io_md": false, 00:07:38.041 "write_zeroes": true, 00:07:38.041 "zcopy": false, 00:07:38.041 "get_zone_info": false, 00:07:38.041 "zone_management": false, 00:07:38.041 "zone_append": false, 00:07:38.041 "compare": false, 00:07:38.041 "compare_and_write": false, 00:07:38.041 "abort": false, 00:07:38.041 "seek_hole": true, 00:07:38.041 "seek_data": true, 00:07:38.041 "copy": false, 00:07:38.041 "nvme_iov_md": false 00:07:38.041 }, 00:07:38.041 "driver_specific": { 00:07:38.041 "lvol": { 00:07:38.041 "lvol_store_uuid": "3a44c036-bc37-4ef5-9f08-ccff81f0a1f9", 00:07:38.041 "base_bdev": "aio_bdev", 00:07:38.041 "thin_provision": false, 00:07:38.041 "num_allocated_clusters": 38, 00:07:38.041 "snapshot": false, 00:07:38.041 "clone": false, 00:07:38.041 "esnap_clone": false 00:07:38.041 } 00:07:38.041 } 00:07:38.042 } 00:07:38.042 ] 00:07:38.042 12:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:38.042 12:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a44c036-bc37-4ef5-9f08-ccff81f0a1f9 00:07:38.042 
12:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:38.042 12:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:38.042 12:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:38.042 12:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a44c036-bc37-4ef5-9f08-ccff81f0a1f9 00:07:38.301 12:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:38.301 12:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dba32384-3fcc-48ed-8396-b135a73a6b7a 00:07:38.560 12:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3a44c036-bc37-4ef5-9f08-ccff81f0a1f9 00:07:38.819 12:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:38.819 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:39.077 00:07:39.077 real 0m15.675s 00:07:39.077 user 0m15.216s 00:07:39.077 sys 0m1.516s 00:07:39.077 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.077 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:39.077 ************************************ 00:07:39.077 END TEST lvs_grow_clean 00:07:39.077 ************************************ 00:07:39.077 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:39.077 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:39.077 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.077 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:39.077 ************************************ 00:07:39.077 START TEST lvs_grow_dirty 00:07:39.077 ************************************ 00:07:39.077 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:39.077 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:39.077 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:39.077 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:39.077 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:39.077 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:39.077 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:39.077 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:39.077 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:39.077 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:39.335 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:39.335 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:39.335 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ae91e3be-a93b-4b6b-8062-37cac6478b8b 00:07:39.335 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae91e3be-a93b-4b6b-8062-37cac6478b8b 00:07:39.335 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:39.593 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:39.593 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:39.593 12:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ae91e3be-a93b-4b6b-8062-37cac6478b8b lvol 150 00:07:39.851 12:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=66f7dfb6-4381-4943-b53d-3280ab0e959d 00:07:39.851 12:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:39.852 12:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:40.110 [2024-11-19 12:59:43.238858] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:40.110 [2024-11-19 12:59:43.238910] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:40.110 true 00:07:40.110 12:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae91e3be-a93b-4b6b-8062-37cac6478b8b 00:07:40.110 12:59:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:40.110 12:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:40.110 12:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:40.369 12:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 66f7dfb6-4381-4943-b53d-3280ab0e959d 00:07:40.628 12:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:40.628 [2024-11-19 12:59:43.973032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.628 12:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:40.886 12:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2698961 00:07:40.886 12:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:40.886 12:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:40.886 12:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2698961 /var/tmp/bdevperf.sock 00:07:40.886 12:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2698961 ']' 00:07:40.886 12:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:40.886 12:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.886 12:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:40.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:40.886 12:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.886 12:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.886 [2024-11-19 12:59:44.208904] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:40.886 [2024-11-19 12:59:44.208956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2698961 ] 00:07:41.145 [2024-11-19 12:59:44.285215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.145 [2024-11-19 12:59:44.325898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.145 12:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.145 12:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:41.145 12:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:41.711 Nvme0n1 00:07:41.711 12:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:41.711 [ 00:07:41.711 { 00:07:41.711 "name": "Nvme0n1", 00:07:41.711 "aliases": [ 00:07:41.711 "66f7dfb6-4381-4943-b53d-3280ab0e959d" 00:07:41.711 ], 00:07:41.711 "product_name": "NVMe disk", 00:07:41.711 "block_size": 4096, 00:07:41.711 "num_blocks": 38912, 00:07:41.711 "uuid": "66f7dfb6-4381-4943-b53d-3280ab0e959d", 00:07:41.711 "numa_id": 1, 00:07:41.711 "assigned_rate_limits": { 00:07:41.711 "rw_ios_per_sec": 0, 00:07:41.711 "rw_mbytes_per_sec": 0, 00:07:41.711 "r_mbytes_per_sec": 0, 00:07:41.711 "w_mbytes_per_sec": 0 00:07:41.711 }, 00:07:41.711 "claimed": false, 00:07:41.711 "zoned": false, 00:07:41.711 "supported_io_types": { 00:07:41.711 "read": true, 00:07:41.711 "write": true, 00:07:41.711 "unmap": true, 00:07:41.711 "flush": true, 00:07:41.711 "reset": true, 00:07:41.711 "nvme_admin": true, 00:07:41.711 "nvme_io": true, 00:07:41.711 "nvme_io_md": false, 00:07:41.711 "write_zeroes": true, 00:07:41.711 "zcopy": false, 00:07:41.711 "get_zone_info": false, 00:07:41.711 "zone_management": false, 00:07:41.711 "zone_append": false, 00:07:41.711 "compare": true, 00:07:41.711 "compare_and_write": true, 00:07:41.711 "abort": true, 00:07:41.711 "seek_hole": false, 00:07:41.711 "seek_data": false, 00:07:41.711 "copy": true, 00:07:41.711 "nvme_iov_md": false 00:07:41.711 }, 00:07:41.711 "memory_domains": [ 00:07:41.711 { 00:07:41.711 "dma_device_id": "system", 00:07:41.711 "dma_device_type": 1 00:07:41.711 } 00:07:41.711 ], 00:07:41.711 "driver_specific": { 00:07:41.711 "nvme": [ 00:07:41.711 { 00:07:41.711 "trid": { 00:07:41.711 "trtype": "TCP", 00:07:41.711 "adrfam": "IPv4", 00:07:41.711 "traddr": "10.0.0.2", 00:07:41.711 "trsvcid": "4420", 00:07:41.711 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:41.711 }, 00:07:41.711 "ctrlr_data": { 00:07:41.711 "cntlid": 1, 00:07:41.711 "vendor_id": "0x8086", 00:07:41.711 "model_number": "SPDK bdev Controller", 00:07:41.711 "serial_number": "SPDK0", 00:07:41.711 "firmware_revision": "25.01", 00:07:41.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:41.711 "oacs": { 00:07:41.711 "security": 0, 00:07:41.711 "format": 0, 00:07:41.711 "firmware": 0, 00:07:41.711 "ns_manage": 0 00:07:41.711 }, 00:07:41.711 "multi_ctrlr": true, 00:07:41.711 
"ana_reporting": false 00:07:41.711 }, 00:07:41.711 "vs": { 00:07:41.711 "nvme_version": "1.3" 00:07:41.711 }, 00:07:41.711 "ns_data": { 00:07:41.711 "id": 1, 00:07:41.711 "can_share": true 00:07:41.711 } 00:07:41.711 } 00:07:41.711 ], 00:07:41.711 "mp_policy": "active_passive" 00:07:41.711 } 00:07:41.711 } 00:07:41.711 ] 00:07:41.711 12:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2699182 00:07:41.711 12:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:41.711 12:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:41.969 Running I/O for 10 seconds... 00:07:42.905 Latency(us) 00:07:42.905 [2024-11-19T11:59:46.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.905 Nvme0n1 : 1.00 22743.00 88.84 0.00 0.00 0.00 0.00 0.00 00:07:42.905 [2024-11-19T11:59:46.282Z] =================================================================================================================== 00:07:42.905 [2024-11-19T11:59:46.282Z] Total : 22743.00 88.84 0.00 0.00 0.00 0.00 0.00 00:07:42.905 00:07:43.842 12:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ae91e3be-a93b-4b6b-8062-37cac6478b8b 00:07:43.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.842 Nvme0n1 : 2.00 22854.00 89.27 0.00 0.00 0.00 0.00 0.00 00:07:43.842 [2024-11-19T11:59:47.219Z] =================================================================================================================== 00:07:43.842 [2024-11-19T11:59:47.219Z] Total : 22854.00 89.27 0.00 0.00 0.00 0.00 0.00 00:07:43.842 00:07:44.100 true 00:07:44.100 12:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae91e3be-a93b-4b6b-8062-37cac6478b8b 00:07:44.100 12:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:44.358 12:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:44.358 12:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:44.358 12:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2699182 00:07:44.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.925 Nvme0n1 : 3.00 22878.67 89.37 0.00 0.00 0.00 0.00 0.00 00:07:44.925 [2024-11-19T11:59:48.302Z] =================================================================================================================== 00:07:44.925 [2024-11-19T11:59:48.302Z] Total : 22878.67 89.37 0.00 0.00 0.00 0.00 0.00 00:07:44.925 00:07:45.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.860 Nvme0n1 : 4.00 22935.75 89.59 0.00 0.00 0.00 0.00 0.00 00:07:45.860 [2024-11-19T11:59:49.237Z] 
=================================================================================================================== 00:07:45.860 [2024-11-19T11:59:49.237Z] Total : 22935.75 89.59 0.00 0.00 0.00 0.00 0.00 00:07:45.860 00:07:47.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.235 Nvme0n1 : 5.00 22951.60 89.65 0.00 0.00 0.00 0.00 0.00 00:07:47.235 [2024-11-19T11:59:50.612Z] =================================================================================================================== 00:07:47.235 [2024-11-19T11:59:50.612Z] Total : 22951.60 89.65 0.00 0.00 0.00 0.00 0.00 00:07:47.235 00:07:48.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.171 Nvme0n1 : 6.00 22979.33 89.76 0.00 0.00 0.00 0.00 0.00 00:07:48.172 [2024-11-19T11:59:51.549Z] =================================================================================================================== 00:07:48.172 [2024-11-19T11:59:51.549Z] Total : 22979.33 89.76 0.00 0.00 0.00 0.00 0.00 00:07:48.172 00:07:49.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.108 Nvme0n1 : 7.00 23002.00 89.85 0.00 0.00 0.00 0.00 0.00 00:07:49.108 [2024-11-19T11:59:52.485Z] =================================================================================================================== 00:07:49.108 [2024-11-19T11:59:52.485Z] Total : 23002.00 89.85 0.00 0.00 0.00 0.00 0.00 00:07:49.108 00:07:50.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.045 Nvme0n1 : 8.00 23017.75 89.91 0.00 0.00 0.00 0.00 0.00 00:07:50.045 [2024-11-19T11:59:53.422Z] =================================================================================================================== 00:07:50.045 [2024-11-19T11:59:53.422Z] Total : 23017.75 89.91 0.00 0.00 0.00 0.00 0.00 00:07:50.045 00:07:50.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.987 Nvme0n1 : 9.00 23042.89 90.01 0.00 0.00 0.00 0.00 0.00 00:07:50.987 [2024-11-19T11:59:54.364Z] =================================================================================================================== 00:07:50.987 [2024-11-19T11:59:54.364Z] Total : 23042.89 90.01 0.00 0.00 0.00 0.00 0.00 00:07:50.987 00:07:52.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.055 Nvme0n1 : 10.00 23052.00 90.05 0.00 0.00 0.00 0.00 0.00 00:07:52.055 [2024-11-19T11:59:55.432Z] =================================================================================================================== 00:07:52.055 [2024-11-19T11:59:55.432Z] Total : 23052.00 90.05 0.00 0.00 0.00 0.00 0.00 00:07:52.055 00:07:52.055 00:07:52.055 Latency(us) 00:07:52.055 [2024-11-19T11:59:55.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.055 Nvme0n1 : 10.00 23056.13 90.06 0.00 0.00 5548.81 1453.19 10143.83 00:07:52.055 [2024-11-19T11:59:55.432Z] =================================================================================================================== 00:07:52.055 [2024-11-19T11:59:55.432Z] Total : 23056.13 90.06 0.00 0.00 5548.81 1453.19 10143.83 00:07:52.055 { 00:07:52.055 "results": [ 00:07:52.055 { 00:07:52.055 "job": "Nvme0n1", 00:07:52.055 "core_mask": "0x2", 00:07:52.055 "workload": "randwrite", 00:07:52.055 "status": "finished", 00:07:52.055 "queue_depth": 128, 00:07:52.055 "io_size": 4096, 00:07:52.055 
"runtime": 10.003022, 00:07:52.055 "iops": 23056.132436777607, 00:07:52.055 "mibps": 90.06301733116253, 00:07:52.055 "io_failed": 0, 00:07:52.055 "io_timeout": 0, 00:07:52.055 "avg_latency_us": 5548.8127603947805, 00:07:52.055 "min_latency_us": 1453.1895652173912, 00:07:52.055 "max_latency_us": 10143.83304347826 00:07:52.055 } 00:07:52.055 ], 00:07:52.055 "core_count": 1 00:07:52.055 } 00:07:52.055 12:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2698961 00:07:52.055 12:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2698961 ']' 00:07:52.055 12:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2698961 00:07:52.055 12:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:52.055 12:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.055 12:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2698961 00:07:52.055 12:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:52.055 12:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:52.055 12:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2698961' 00:07:52.055 killing process with pid 2698961 00:07:52.055 12:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2698961 00:07:52.055 Received shutdown signal, test time was about 10.000000 seconds 00:07:52.055 00:07:52.055 Latency(us) 00:07:52.055 [2024-11-19T11:59:55.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.055 [2024-11-19T11:59:55.432Z] =================================================================================================================== 00:07:52.055 [2024-11-19T11:59:55.432Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:52.055 12:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2698961 00:07:52.332 12:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:52.333 12:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:52.591 12:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae91e3be-a93b-4b6b-8062-37cac6478b8b 00:07:52.591 12:59:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:52.850 12:59:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2695855 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2695855 00:07:52.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2695855 Killed "${NVMF_APP[@]}" "$@" 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2701032 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2701032 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2701032 ']' 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.850 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:52.850 [2024-11-19 12:59:56.152988] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:07:52.851 [2024-11-19 12:59:56.153037] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.110 [2024-11-19 12:59:56.232137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.110 [2024-11-19 12:59:56.273150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.110 [2024-11-19 12:59:56.273185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.110 [2024-11-19 12:59:56.273192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.110 [2024-11-19 12:59:56.273198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:53.110 [2024-11-19 12:59:56.273206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.110 [2024-11-19 12:59:56.273785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.110 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.110 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:53.110 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.110 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.110 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:53.110 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.110 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:53.369 [2024-11-19 12:59:56.567113] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:53.369 [2024-11-19 12:59:56.567206] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:53.369 [2024-11-19 12:59:56.567231] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:53.369 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:53.369 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 66f7dfb6-4381-4943-b53d-3280ab0e959d 00:07:53.369 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=66f7dfb6-4381-4943-b53d-3280ab0e959d 00:07:53.369 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:53.369 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:53.369 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:53.369 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:53.369 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:53.628 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 66f7dfb6-4381-4943-b53d-3280ab0e959d -t 2000 00:07:53.628 [ 00:07:53.628 { 00:07:53.628 "name": "66f7dfb6-4381-4943-b53d-3280ab0e959d", 00:07:53.628 "aliases": [ 00:07:53.628 "lvs/lvol" 00:07:53.628 ], 00:07:53.628 "product_name": "Logical Volume", 00:07:53.628 "block_size": 4096, 00:07:53.628 "num_blocks": 38912, 00:07:53.628 "uuid": "66f7dfb6-4381-4943-b53d-3280ab0e959d", 00:07:53.628 "assigned_rate_limits": { 00:07:53.628 "rw_ios_per_sec": 0, 00:07:53.628 "rw_mbytes_per_sec": 0, 
00:07:53.628 "r_mbytes_per_sec": 0, 00:07:53.628 "w_mbytes_per_sec": 0 00:07:53.628 }, 00:07:53.628 "claimed": false, 00:07:53.628 "zoned": false, 00:07:53.628 "supported_io_types": { 00:07:53.628 "read": true, 00:07:53.628 "write": true, 00:07:53.628 "unmap": true, 00:07:53.628 "flush": false, 00:07:53.628 "reset": true, 00:07:53.628 "nvme_admin": false, 00:07:53.628 "nvme_io": false, 00:07:53.628 "nvme_io_md": false, 00:07:53.628 "write_zeroes": true, 00:07:53.628 "zcopy": false, 00:07:53.628 "get_zone_info": false, 00:07:53.628 "zone_management": false, 00:07:53.628 "zone_append": false, 00:07:53.628 "compare": false, 00:07:53.628 "compare_and_write": false, 00:07:53.628 "abort": false, 00:07:53.628 "seek_hole": true, 00:07:53.628 "seek_data": true, 00:07:53.628 "copy": false, 00:07:53.628 "nvme_iov_md": false 00:07:53.628 }, 00:07:53.628 "driver_specific": { 00:07:53.628 "lvol": { 00:07:53.628 "lvol_store_uuid": "ae91e3be-a93b-4b6b-8062-37cac6478b8b", 00:07:53.628 "base_bdev": "aio_bdev", 00:07:53.628 "thin_provision": false, 00:07:53.628 "num_allocated_clusters": 38, 00:07:53.628 "snapshot": false, 00:07:53.628 "clone": false, 00:07:53.628 "esnap_clone": false 00:07:53.628 } 00:07:53.628 } 00:07:53.628 } 00:07:53.628 ] 00:07:53.628 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:53.628 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae91e3be-a93b-4b6b-8062-37cac6478b8b 00:07:53.628 12:59:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:53.887 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:53.887 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae91e3be-a93b-4b6b-8062-37cac6478b8b 00:07:53.887 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:54.146 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:54.146 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:54.405 [2024-11-19 12:59:57.560331] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:54.405 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae91e3be-a93b-4b6b-8062-37cac6478b8b 00:07:54.405 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:54.405 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae91e3be-a93b-4b6b-8062-37cac6478b8b 00:07:54.405 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.405 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.405 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.405 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.405 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.405 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.405 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.405 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:54.405 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae91e3be-a93b-4b6b-8062-37cac6478b8b 00:07:54.405 request: 00:07:54.405 { 00:07:54.405 "uuid": "ae91e3be-a93b-4b6b-8062-37cac6478b8b", 00:07:54.405 "method": "bdev_lvol_get_lvstores", 00:07:54.405 "req_id": 1 00:07:54.405 } 00:07:54.405 Got JSON-RPC error response 00:07:54.405 response: 00:07:54.405 { 00:07:54.405 "code": -19, 00:07:54.405 "message": "No such device" 00:07:54.405 } 00:07:54.667 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:54.667 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:54.667 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:54.667 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:54.667 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:54.667 aio_bdev 00:07:54.667 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 66f7dfb6-4381-4943-b53d-3280ab0e959d 00:07:54.667 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=66f7dfb6-4381-4943-b53d-3280ab0e959d 00:07:54.667 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:54.667 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:54.667 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:54.667 12:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:54.667 12:59:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:54.926 12:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 66f7dfb6-4381-4943-b53d-3280ab0e959d -t 2000 00:07:55.185 [ 00:07:55.185 { 00:07:55.185 "name": "66f7dfb6-4381-4943-b53d-3280ab0e959d", 00:07:55.185 "aliases": [ 00:07:55.185 "lvs/lvol" 00:07:55.185 ], 00:07:55.185 "product_name": "Logical Volume", 00:07:55.185 "block_size": 4096, 00:07:55.185 "num_blocks": 38912, 00:07:55.185 "uuid": "66f7dfb6-4381-4943-b53d-3280ab0e959d", 00:07:55.185 "assigned_rate_limits": { 00:07:55.185 "rw_ios_per_sec": 0, 00:07:55.185 "rw_mbytes_per_sec": 0, 00:07:55.185 "r_mbytes_per_sec": 0, 00:07:55.185 "w_mbytes_per_sec": 0 00:07:55.185 }, 00:07:55.185 "claimed": false, 00:07:55.185 "zoned": false, 00:07:55.185 "supported_io_types": { 00:07:55.185 "read": true, 00:07:55.185 "write": true, 00:07:55.185 "unmap": true, 00:07:55.185 "flush": false, 00:07:55.185 "reset": true, 00:07:55.185 "nvme_admin": false, 00:07:55.185 "nvme_io": false, 00:07:55.185 "nvme_io_md": false, 00:07:55.185 "write_zeroes": true, 00:07:55.185 "zcopy": false, 00:07:55.185 "get_zone_info": false, 00:07:55.185 "zone_management": false, 00:07:55.185 "zone_append": false, 00:07:55.185 "compare": false, 00:07:55.185 "compare_and_write": false, 00:07:55.185 "abort": false, 00:07:55.185 "seek_hole": true, 00:07:55.185 "seek_data": true, 00:07:55.185 "copy": false, 00:07:55.185 "nvme_iov_md": false 00:07:55.185 }, 00:07:55.185 "driver_specific": { 00:07:55.185 "lvol": { 00:07:55.185 "lvol_store_uuid": "ae91e3be-a93b-4b6b-8062-37cac6478b8b", 00:07:55.185 "base_bdev": "aio_bdev", 00:07:55.185 "thin_provision": false, 00:07:55.185 "num_allocated_clusters": 38, 00:07:55.185 "snapshot": false, 00:07:55.185 "clone": false, 00:07:55.185 "esnap_clone": false 00:07:55.185 } 00:07:55.185 } 00:07:55.185 } 00:07:55.185 ] 00:07:55.185 12:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:55.185 12:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae91e3be-a93b-4b6b-8062-37cac6478b8b 00:07:55.185 12:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:55.185 12:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:55.185 12:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae91e3be-a93b-4b6b-8062-37cac6478b8b 00:07:55.185 12:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:55.445 12:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:55.445 12:59:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 66f7dfb6-4381-4943-b53d-3280ab0e959d 00:07:55.704 12:59:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ae91e3be-a93b-4b6b-8062-37cac6478b8b 00:07:55.963 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:55.963 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:55.963 00:07:55.963 real 0m17.044s 00:07:55.963 user 0m43.996s 00:07:55.963 sys 0m3.875s 00:07:55.963 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.963 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:55.963 ************************************ 00:07:55.963 END TEST lvs_grow_dirty 00:07:55.963 ************************************ 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:56.222 nvmf_trace.0 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:56.222 rmmod nvme_tcp 00:07:56.222 rmmod nvme_fabrics 00:07:56.222 rmmod nvme_keyring 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:56.222 
12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2701032 ']' 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2701032 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2701032 ']' 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2701032 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2701032 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2701032' 00:07:56.222 killing process with pid 2701032 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2701032 00:07:56.222 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2701032 00:07:56.482 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:56.482 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:56.482 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:56.482 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:56.482 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:56.482 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:56.482 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:56.482 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:56.482 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:56.482 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.482 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.482 12:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.388 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:58.388 00:07:58.388 real 0m41.994s 00:07:58.388 user 1m4.812s 00:07:58.388 sys 0m10.372s 00:07:58.388 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.388 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:58.388 ************************************ 00:07:58.388 END TEST nvmf_lvs_grow 00:07:58.388 ************************************ 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:58.647 ************************************ 00:07:58.647 START TEST nvmf_bdev_io_wait 00:07:58.647 ************************************ 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:58.647 * Looking for test storage... 00:07:58.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.647 13:00:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:58.647 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:58.647 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.647 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:58.647 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.647 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:58.647 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:58.647 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.647 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:58.647 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.647 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.647 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.647 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:58.648 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.648 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:58.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.648 --rc genhtml_branch_coverage=1 00:07:58.648 --rc genhtml_function_coverage=1 00:07:58.648 --rc genhtml_legend=1 00:07:58.648 --rc geninfo_all_blocks=1 00:07:58.648 --rc geninfo_unexecuted_blocks=1 00:07:58.648 00:07:58.648 ' 00:07:58.648 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:58.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.648 --rc genhtml_branch_coverage=1 00:07:58.648 --rc genhtml_function_coverage=1 00:07:58.648 --rc genhtml_legend=1 00:07:58.648 --rc geninfo_all_blocks=1 00:07:58.648 --rc geninfo_unexecuted_blocks=1 00:07:58.648 00:07:58.648 ' 00:07:58.648 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:58.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.648 --rc genhtml_branch_coverage=1 00:07:58.648 --rc genhtml_function_coverage=1 00:07:58.648 --rc genhtml_legend=1 00:07:58.648 --rc geninfo_all_blocks=1 00:07:58.648 --rc geninfo_unexecuted_blocks=1 00:07:58.648 00:07:58.648 ' 00:07:58.648 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:58.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.648 --rc genhtml_branch_coverage=1 00:07:58.648 --rc genhtml_function_coverage=1 00:07:58.648 --rc genhtml_legend=1 00:07:58.648 --rc geninfo_all_blocks=1 00:07:58.648 --rc geninfo_unexecuted_blocks=1 00:07:58.648 00:07:58.648 ' 00:07:58.648 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.648 13:00:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:58.648 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.648 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.648 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.648 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.648 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.648 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.648 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.648 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.648 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.648 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.907 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:58.907 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:58.907 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.907 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.907 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.907 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.907 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.907 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.907 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.907 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.907 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.907 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.907 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:58.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:58.908 13:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:05.481 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:05.481 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.481 13:00:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:05.481 Found net devices under 0000:86:00.0: cvl_0_0 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:05.481 Found net devices under 0000:86:00.1: cvl_0_1 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:05.481 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:05.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:05.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:08:05.482 00:08:05.482 --- 10.0.0.2 ping statistics --- 00:08:05.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.482 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:05.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:05.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:08:05.482 00:08:05.482 --- 10.0.0.1 ping statistics --- 00:08:05.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.482 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:05.482 13:00:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:05.482 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:05.482 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:05.482 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:05.482 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:05.482 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2705620 00:08:05.482 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2705620 00:08:05.482 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:05.482 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2705620 ']' 00:08:05.482 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.482 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.482 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.482 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.482 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:05.482 [2024-11-19 13:00:08.077837] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
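For reference, the nvmf_tcp_init sequence traced above reduces to a short block of iproute2/iptables commands. This is a condensed sketch using the interface names (cvl_0_0/cvl_0_1), namespace name, and 10.0.0.0/24 addresses from this particular run; they come from this test environment, not from any fixed default:

  # move the target-side port into its own namespace and address both ends
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port, tagged so teardown can strip exactly this rule
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Running nvmf_tgt inside the namespace (the NVMF_TARGET_NS_CMD prefix above) while bdevperf stays in the root namespace is what lets a single two-port NIC act as both ends of a real TCP path on one host.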
00:08:05.482 [2024-11-19 13:00:08.077882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.482 [2024-11-19 13:00:08.157020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.482 [2024-11-19 13:00:08.198642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.482 [2024-11-19 13:00:08.198682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.482 [2024-11-19 13:00:08.198689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.482 [2024-11-19 13:00:08.198694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.482 [2024-11-19 13:00:08.198699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:05.482 [2024-11-19 13:00:08.200178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.482 [2024-11-19 13:00:08.200285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.482 [2024-11-19 13:00:08.200371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.482 [2024-11-19 13:00:08.200373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.742 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.742 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:05.742 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:05.742 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:05.742 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:05.742 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.742 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:05.742 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.742 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:05.742 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.742 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:05.742 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.742 13:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:05.742 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.742 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:05.742 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.742 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:05.742 [2024-11-19 13:00:09.030600] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.742 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.742 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:05.743 Malloc0 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:05.743 [2024-11-19 13:00:09.078171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2705871 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2705873 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:05.743 { 00:08:05.743 "params": { 
00:08:05.743 "name": "Nvme$subsystem", 00:08:05.743 "trtype": "$TEST_TRANSPORT", 00:08:05.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:05.743 "adrfam": "ipv4", 00:08:05.743 "trsvcid": "$NVMF_PORT", 00:08:05.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:05.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:05.743 "hdgst": ${hdgst:-false}, 00:08:05.743 "ddgst": ${ddgst:-false} 00:08:05.743 }, 00:08:05.743 "method": "bdev_nvme_attach_controller" 00:08:05.743 } 00:08:05.743 EOF 00:08:05.743 )") 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2705875 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:05.743 { 00:08:05.743 "params": { 00:08:05.743 "name": "Nvme$subsystem", 00:08:05.743 "trtype": "$TEST_TRANSPORT", 00:08:05.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:05.743 "adrfam": "ipv4", 00:08:05.743 "trsvcid": "$NVMF_PORT", 00:08:05.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:05.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:05.743 "hdgst": ${hdgst:-false}, 00:08:05.743 "ddgst": ${ddgst:-false} 00:08:05.743 }, 00:08:05.743 "method": "bdev_nvme_attach_controller" 00:08:05.743 } 00:08:05.743 EOF 00:08:05.743 )") 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2705878 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:05.743 { 00:08:05.743 "params": { 
00:08:05.743 "name": "Nvme$subsystem", 00:08:05.743 "trtype": "$TEST_TRANSPORT", 00:08:05.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:05.743 "adrfam": "ipv4", 00:08:05.743 "trsvcid": "$NVMF_PORT", 00:08:05.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:05.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:05.743 "hdgst": ${hdgst:-false}, 00:08:05.743 "ddgst": ${ddgst:-false} 00:08:05.743 }, 00:08:05.743 "method": "bdev_nvme_attach_controller" 00:08:05.743 } 00:08:05.743 EOF 00:08:05.743 )") 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:05.743 { 00:08:05.743 "params": { 00:08:05.743 "name": "Nvme$subsystem", 00:08:05.743 "trtype": "$TEST_TRANSPORT", 00:08:05.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:05.743 "adrfam": "ipv4", 00:08:05.743 "trsvcid": "$NVMF_PORT", 00:08:05.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:05.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:05.743 "hdgst": ${hdgst:-false}, 00:08:05.743 "ddgst": ${ddgst:-false} 00:08:05.743 }, 00:08:05.743 "method": "bdev_nvme_attach_controller" 00:08:05.743 } 00:08:05.743 EOF 00:08:05.743 )") 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2705871 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:05.743 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:05.743 "params": { 00:08:05.743 "name": "Nvme1", 00:08:05.744 "trtype": "tcp", 00:08:05.744 "traddr": "10.0.0.2", 00:08:05.744 "adrfam": "ipv4", 00:08:05.744 "trsvcid": "4420", 00:08:05.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:05.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:05.744 "hdgst": false, 00:08:05.744 "ddgst": false 00:08:05.744 }, 00:08:05.744 "method": "bdev_nvme_attach_controller" 00:08:05.744 }' 00:08:05.744 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:05.744 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:05.744 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:05.744 "params": { 00:08:05.744 "name": "Nvme1", 00:08:05.744 "trtype": "tcp", 00:08:05.744 "traddr": "10.0.0.2", 00:08:05.744 "adrfam": "ipv4", 00:08:05.744 "trsvcid": "4420", 00:08:05.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:05.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:05.744 "hdgst": false, 00:08:05.744 "ddgst": false 00:08:05.744 }, 00:08:05.744 "method": "bdev_nvme_attach_controller" 00:08:05.744 }' 00:08:05.744 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:05.744 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:05.744 "params": { 00:08:05.744 "name": "Nvme1", 00:08:05.744 "trtype": "tcp", 00:08:05.744 "traddr": "10.0.0.2", 00:08:05.744 "adrfam": "ipv4", 00:08:05.744 "trsvcid": "4420", 00:08:05.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:05.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:05.744 "hdgst": false, 00:08:05.744 "ddgst": false 00:08:05.744 }, 00:08:05.744 "method": "bdev_nvme_attach_controller" 00:08:05.744 }' 00:08:05.744 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:05.744 13:00:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:05.744 "params": { 00:08:05.744 "name": "Nvme1", 00:08:05.744 "trtype": "tcp", 00:08:05.744 "traddr": "10.0.0.2", 00:08:05.744 "adrfam": "ipv4", 00:08:05.744 "trsvcid": "4420", 00:08:05.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:05.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:05.744 "hdgst": false, 00:08:05.744 "ddgst": false 00:08:05.744 }, 00:08:05.744 "method": "bdev_nvme_attach_controller" 00:08:05.744 }' 00:08:06.002 [2024-11-19 13:00:09.128404] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:06.002 [2024-11-19 13:00:09.128406] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:06.002 [2024-11-19 13:00:09.128454] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:06.002 [2024-11-19 13:00:09.128455] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:06.002 [2024-11-19 13:00:09.128846] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:06.002 [2024-11-19 13:00:09.128883] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:06.002 [2024-11-19 13:00:09.134883] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
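Each of the four bdevperf instances receives its configuration through --json /dev/fd/63, which is the bash process substitution of gen_nvmf_target_json's output. The printf blocks above show only the inner attach-controller entry; assembled, the document bdevperf parses looks roughly like the sketch below, where the outer "subsystems" wrapper is recalled from SPDK's JSON config layout rather than printed in this trace:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }

The four instances differ only in core mask, shm id, and workload:

  bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
  WRITE_PID=$!
  # likewise -m 0x20/-i 2 -w read, -m 0x40/-i 3 -w flush, -m 0x80/-i 4 -w unmap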
00:08:06.002 [2024-11-19 13:00:09.134928] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:06.002 [2024-11-19 13:00:09.316617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.002 [2024-11-19 13:00:09.359625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:06.261 [2024-11-19 13:00:09.412418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.261 [2024-11-19 13:00:09.455479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:06.261 [2024-11-19 13:00:09.505508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.261 [2024-11-19 13:00:09.550509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.261 [2024-11-19 13:00:09.565509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:06.261 [2024-11-19 13:00:09.593583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:06.520 Running I/O for 1 seconds... 00:08:06.520 Running I/O for 1 seconds... 00:08:06.520 Running I/O for 1 seconds... 00:08:06.520 Running I/O for 1 seconds... 00:08:07.456 7843.00 IOPS, 30.64 MiB/s 00:08:07.456 Latency(us) 00:08:07.456 [2024-11-19T12:00:10.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.456 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:07.456 Nvme1n1 : 1.02 7874.51 30.76 0.00 0.00 16128.84 6639.08 27012.23 00:08:07.456 [2024-11-19T12:00:10.833Z] =================================================================================================================== 00:08:07.456 [2024-11-19T12:00:10.833Z] Total : 7874.51 30.76 0.00 0.00 16128.84 6639.08 27012.23 00:08:07.456 246080.00 IOPS, 961.25 MiB/s 00:08:07.456 Latency(us) 00:08:07.456 [2024-11-19T12:00:10.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.456 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:07.456 Nvme1n1 : 1.00 245700.28 959.77 0.00 0.00 518.80 233.29 1545.79 00:08:07.456 [2024-11-19T12:00:10.833Z] =================================================================================================================== 00:08:07.456 [2024-11-19T12:00:10.833Z] Total : 245700.28 959.77 0.00 0.00 518.80 233.29 1545.79 00:08:07.456 7300.00 IOPS, 28.52 MiB/s 00:08:07.456 Latency(us) 00:08:07.456 [2024-11-19T12:00:10.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.456 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:07.456 Nvme1n1 : 1.01 7386.84 28.85 0.00 0.00 17274.51 5071.92 30773.43 00:08:07.456 [2024-11-19T12:00:10.833Z] =================================================================================================================== 00:08:07.456 [2024-11-19T12:00:10.833Z] Total : 7386.84 28.85 0.00 0.00 17274.51 5071.92 30773.43 00:08:07.456 12137.00 IOPS, 47.41 MiB/s 00:08:07.456 Latency(us) 00:08:07.456 [2024-11-19T12:00:10.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.456 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:07.456 Nvme1n1 : 1.01 12213.35 47.71 0.00 0.00 10453.67 3433.52 18350.08 00:08:07.456 [2024-11-19T12:00:10.833Z] 
=================================================================================================================== 00:08:07.456 [2024-11-19T12:00:10.833Z] Total : 12213.35 47.71 0.00 0.00 10453.67 3433.52 18350.08 00:08:07.456 13:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2705873 00:08:07.715 13:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2705875 00:08:07.716 13:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2705878 00:08:07.716 13:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.716 13:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.716 13:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.716 13:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.716 13:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:07.716 13:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:07.716 13:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:07.716 13:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:07.716 13:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:07.716 13:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:07.716 13:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:07.716 13:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:07.716 rmmod nvme_tcp 00:08:07.716 rmmod nvme_fabrics 00:08:07.716 rmmod nvme_keyring 00:08:07.716 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:07.716 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:07.716 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:07.716 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2705620 ']' 00:08:07.716 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2705620 00:08:07.716 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2705620 ']' 00:08:07.716 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2705620 00:08:07.716 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:07.716 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.716 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2705620 00:08:07.716 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.716 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.716 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2705620' 00:08:07.716 killing process with pid 2705620 00:08:07.716 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2705620 00:08:07.716 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2705620 00:08:07.976 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:07.976 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:07.976 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:07.976 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:07.976 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:07.976 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:07.976 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:07.976 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:07.976 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:07.976 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.976 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.976 13:00:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.514 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:10.515 00:08:10.515 real 0m11.457s 00:08:10.515 user 0m19.010s 00:08:10.515 sys 0m6.180s 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.515 ************************************ 00:08:10.515 END TEST nvmf_bdev_io_wait 00:08:10.515 ************************************ 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:10.515 ************************************ 00:08:10.515 START TEST nvmf_queue_depth 00:08:10.515 ************************************ 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:10.515 * Looking for test storage... 
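Before the queue_depth trace continues, it is worth condensing what the nvmftestfini teardown above actually ran. A sketch of the cleanup sequence; the namespace deletion attributed to _remove_spdk_ns is inferred from the helper's name and the later "ip -4 addr flush" output, since the trace suppresses its body:

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem first
  kill 2705620                                              # killprocess on the nvmf_tgt pid
  modprobe -v -r nvme-tcp                                   # rmmod nvme_tcp/nvme_fabrics/nvme_keyring follow
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore      # strip only the tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                           # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1

Tagging the firewall rule with the SPDK_NVMF comment at setup time is what makes this teardown safe: filtering iptables-save through grep -v removes exactly the rule the test added and leaves any pre-existing rules alone.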
00:08:10.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:10.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.515 --rc genhtml_branch_coverage=1 00:08:10.515 --rc genhtml_function_coverage=1 00:08:10.515 --rc genhtml_legend=1 00:08:10.515 --rc geninfo_all_blocks=1 00:08:10.515 --rc geninfo_unexecuted_blocks=1 00:08:10.515 00:08:10.515 ' 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:10.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.515 --rc genhtml_branch_coverage=1 00:08:10.515 --rc genhtml_function_coverage=1 00:08:10.515 --rc genhtml_legend=1 00:08:10.515 --rc geninfo_all_blocks=1 00:08:10.515 --rc geninfo_unexecuted_blocks=1 00:08:10.515 00:08:10.515 ' 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:10.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.515 --rc genhtml_branch_coverage=1 00:08:10.515 --rc genhtml_function_coverage=1 00:08:10.515 --rc genhtml_legend=1 00:08:10.515 --rc geninfo_all_blocks=1 00:08:10.515 --rc geninfo_unexecuted_blocks=1 00:08:10.515 00:08:10.515 ' 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:10.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.515 --rc genhtml_branch_coverage=1 00:08:10.515 --rc genhtml_function_coverage=1 00:08:10.515 --rc genhtml_legend=1 00:08:10.515 --rc geninfo_all_blocks=1 00:08:10.515 --rc geninfo_unexecuted_blocks=1 00:08:10.515 00:08:10.515 ' 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.515 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:10.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:10.516 13:00:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.090 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.090 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:17.090 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:17.090 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:17.090 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:17.090 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:17.090 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:17.090 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:17.090 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:17.090 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:17.090 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:17.090 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:17.090 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:17.090 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:17.090 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:17.091 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:17.091 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:17.091 Found net devices under 0000:86:00.0: cvl_0_0 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:17.091 Found net devices under 0000:86:00.1: cvl_0_1 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:17.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:08:17.091 00:08:17.091 --- 10.0.0.2 ping statistics --- 00:08:17.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.091 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:17.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:08:17.091 00:08:17.091 --- 10.0.0.1 ping statistics --- 00:08:17.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.091 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.091 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2709674 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2709674 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2709674 ']' 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.092 [2024-11-19 13:00:19.571504] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
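Note: the nvmf/common.sh trace above (functions @250 through @291) builds the standard SPDK two-port TCP rig: one E810 port is moved into a private network namespace and addressed as the target (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), and a single iptables rule admits NVMe/TCP traffic on port 4420 before both directions are ping-checked. A minimal stand-alone sketch of that topology follows; it assumes root privileges and two physically linked ports, the interface defaults mirror this run, and the namespace name spdk_tgt_ns is a placeholder rather than the harness's cvl_0_0_ns_spdk:

  #!/usr/bin/env bash
  set -euo pipefail
  IF_INI=${IF_INI:-cvl_0_1}    # initiator port, stays in the root namespace
  IF_TGT=${IF_TGT:-cvl_0_0}    # target port, moved into the namespace
  NS=spdk_tgt_ns               # placeholder namespace name

  ip -4 addr flush "$IF_TGT"
  ip -4 addr flush "$IF_INI"
  ip netns add "$NS"
  ip link set "$IF_TGT" netns "$NS"                          # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev "$IF_INI"                      # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF_TGT"  # target side
  ip link set "$IF_INI" up
  ip netns exec "$NS" ip link set "$IF_TGT" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$IF_INI" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
  ping -c 1 10.0.0.2                       # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> root ns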
00:08:17.092 [2024-11-19 13:00:19.571549] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.092 [2024-11-19 13:00:19.652156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.092 [2024-11-19 13:00:19.693610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.092 [2024-11-19 13:00:19.693648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.092 [2024-11-19 13:00:19.693656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.092 [2024-11-19 13:00:19.693665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.092 [2024-11-19 13:00:19.693670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.092 [2024-11-19 13:00:19.694257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.092 [2024-11-19 13:00:19.830130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.092 Malloc0 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.092 13:00:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.092 [2024-11-19 13:00:19.880565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2709898 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2709898 /var/tmp/bdevperf.sock 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2709898 ']' 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:17.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.092 13:00:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.092 [2024-11-19 13:00:19.931731] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
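Note: stripped of the xtrace plumbing, the queue-depth test traced above reduces to a handful of RPCs followed by a bdevperf run at queue depth 1024. A rough sketch, assuming an nvmf_tgt is already listening on /var/tmp/spdk.sock and using repo-relative paths (both illustrative, not the absolute Jenkins paths used in this run):

  RPC=./scripts/rpc.py                                # illustrative repo-relative path
  $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO units
  $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB ramdisk, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Drive it with bdevperf: queue depth 1024, 4 KiB verify I/O, 10 seconds.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # (the harness polls for the socket here; a real script should wait/sleep too)
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests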
00:08:17.092 [2024-11-19 13:00:19.931774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2709898 ] 00:08:17.092 [2024-11-19 13:00:20.006415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.092 [2024-11-19 13:00:20.055703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.092 13:00:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.092 13:00:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:17.092 13:00:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:17.092 13:00:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.092 13:00:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.092 NVMe0n1 00:08:17.092 13:00:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.092 13:00:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:17.092 Running I/O for 10 seconds... 00:08:19.406 11269.00 IOPS, 44.02 MiB/s [2024-11-19T12:00:23.718Z] 11776.00 IOPS, 46.00 MiB/s [2024-11-19T12:00:24.654Z] 11919.00 IOPS, 46.56 MiB/s [2024-11-19T12:00:25.590Z] 12007.00 IOPS, 46.90 MiB/s [2024-11-19T12:00:26.527Z] 12059.60 IOPS, 47.11 MiB/s [2024-11-19T12:00:27.464Z] 12090.83 IOPS, 47.23 MiB/s [2024-11-19T12:00:28.839Z] 12008.29 IOPS, 46.91 MiB/s [2024-11-19T12:00:29.772Z] 12021.00 IOPS, 46.96 MiB/s [2024-11-19T12:00:30.708Z] 12064.00 IOPS, 47.12 MiB/s [2024-11-19T12:00:30.709Z] 12065.80 IOPS, 47.13 MiB/s 00:08:27.332 Latency(us) 00:08:27.332 [2024-11-19T12:00:30.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.332 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:27.332 Verification LBA range: start 0x0 length 0x4000 00:08:27.332 NVMe0n1 : 10.06 12088.69 47.22 0.00 0.00 84430.53 19831.76 56076.02 00:08:27.332 [2024-11-19T12:00:30.709Z] =================================================================================================================== 00:08:27.332 [2024-11-19T12:00:30.709Z] Total : 12088.69 47.22 0.00 0.00 84430.53 19831.76 56076.02 00:08:27.332 { 00:08:27.332 "results": [ 00:08:27.332 { 00:08:27.332 "job": "NVMe0n1", 00:08:27.332 "core_mask": "0x1", 00:08:27.332 "workload": "verify", 00:08:27.332 "status": "finished", 00:08:27.332 "verify_range": { 00:08:27.332 "start": 0, 00:08:27.332 "length": 16384 00:08:27.332 }, 00:08:27.332 "queue_depth": 1024, 00:08:27.332 "io_size": 4096, 00:08:27.332 "runtime": 10.063951, 00:08:27.332 "iops": 12088.691608295787, 00:08:27.332 "mibps": 47.22145159490542, 00:08:27.332 "io_failed": 0, 00:08:27.332 "io_timeout": 0, 00:08:27.332 "avg_latency_us": 84430.52686966528, 00:08:27.332 "min_latency_us": 19831.76347826087, 00:08:27.332 "max_latency_us": 56076.02086956522 00:08:27.332 } 00:08:27.332 ], 00:08:27.332 "core_count": 1 00:08:27.332 } 00:08:27.332 13:00:30 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2709898 00:08:27.332 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2709898 ']' 00:08:27.332 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2709898 00:08:27.332 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:27.332 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.332 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2709898 00:08:27.332 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.332 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.332 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2709898' 00:08:27.332 killing process with pid 2709898 00:08:27.332 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2709898 00:08:27.332 Received shutdown signal, test time was about 10.000000 seconds 00:08:27.332 00:08:27.332 Latency(us) 00:08:27.332 [2024-11-19T12:00:30.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.332 [2024-11-19T12:00:30.709Z] =================================================================================================================== 00:08:27.332 [2024-11-19T12:00:30.709Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:27.332 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2709898 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:27.591 rmmod nvme_tcp 00:08:27.591 rmmod nvme_fabrics 00:08:27.591 rmmod nvme_keyring 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2709674 ']' 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2709674 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2709674 ']' 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2709674 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2709674 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2709674' 00:08:27.591 killing process with pid 2709674 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2709674 00:08:27.591 13:00:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2709674 00:08:27.850 13:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:27.850 13:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:27.850 13:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:27.850 13:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:27.850 13:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:27.850 13:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:27.850 13:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:27.850 13:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:27.850 13:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:27.850 13:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.850 13:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.850 13:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.755 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:29.755 00:08:29.755 real 0m19.734s 00:08:29.755 user 0m23.102s 00:08:29.755 sys 0m6.090s 00:08:29.755 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.755 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.755 ************************************ 00:08:29.755 END TEST nvmf_queue_depth 00:08:29.755 ************************************ 00:08:30.015 13:00:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:30.015 13:00:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:30.015 13:00:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.015 13:00:33 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:30.015 ************************************ 00:08:30.015 START TEST nvmf_target_multipath 00:08:30.015 ************************************ 00:08:30.015 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:30.015 * Looking for test storage... 00:08:30.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.015 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:30.015 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:30.015 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:30.015 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:30.015 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.015 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.015 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.015 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.015 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:30.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.016 --rc genhtml_branch_coverage=1 00:08:30.016 --rc genhtml_function_coverage=1 00:08:30.016 --rc genhtml_legend=1 00:08:30.016 --rc geninfo_all_blocks=1 00:08:30.016 --rc geninfo_unexecuted_blocks=1 00:08:30.016 00:08:30.016 ' 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:30.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.016 --rc genhtml_branch_coverage=1 00:08:30.016 --rc genhtml_function_coverage=1 00:08:30.016 --rc genhtml_legend=1 00:08:30.016 --rc geninfo_all_blocks=1 00:08:30.016 --rc geninfo_unexecuted_blocks=1 00:08:30.016 00:08:30.016 ' 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:30.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.016 --rc genhtml_branch_coverage=1 00:08:30.016 --rc genhtml_function_coverage=1 00:08:30.016 --rc genhtml_legend=1 00:08:30.016 --rc geninfo_all_blocks=1 00:08:30.016 --rc geninfo_unexecuted_blocks=1 00:08:30.016 00:08:30.016 ' 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:30.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.016 --rc genhtml_branch_coverage=1 00:08:30.016 --rc genhtml_function_coverage=1 00:08:30.016 --rc genhtml_legend=1 00:08:30.016 --rc geninfo_all_blocks=1 00:08:30.016 --rc geninfo_unexecuted_blocks=1 00:08:30.016 00:08:30.016 ' 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:30.016 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.017 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:30.017 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:30.017 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:30.017 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.017 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.017 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.017 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:30.017 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:30.017 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.017 13:00:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:36.591 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:36.591 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.591 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:36.592 Found net devices under 0000:86:00.0: cvl_0_0 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.592 13:00:39 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:36.592 Found net devices under 0000:86:00.1: cvl_0_1 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:36.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:08:36.592 00:08:36.592 --- 10.0.0.2 ping statistics --- 00:08:36.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.592 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:36.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:36.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:08:36.592 00:08:36.592 --- 10.0.0.1 ping statistics --- 00:08:36.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.592 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:36.592 only one NIC for nvmf test 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:36.592 rmmod nvme_tcp 00:08:36.592 rmmod nvme_fabrics 00:08:36.592 rmmod nvme_keyring 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.592 13:00:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.501 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:38.501 00:08:38.501 real 0m8.408s 00:08:38.501 user 0m1.784s 00:08:38.501 sys 0m4.644s 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:38.502 ************************************ 00:08:38.502 END TEST nvmf_target_multipath 00:08:38.502 ************************************ 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.502 ************************************ 00:08:38.502 START TEST nvmf_zcopy 00:08:38.502 ************************************ 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:38.502 * Looking for test storage... 
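The multipath test bails out cleanly here: nvmf_tcp_init left NVMF_SECOND_TARGET_IP empty (only one target-side NIC survives the namespacing on this rig), so the '[ -z ]' check at multipath.sh@45 fires, the script prints "only one NIC for nvmf test" and exits 0 without running any I/O; the 8.4 s wall time above is setup and teardown only. The harness then moves straight on to the next run_test entry, nvmf_zcopy. To rerun just that test outside the autotest wrapper, the invocation recorded in the trace should suffice (the workspace path is this CI node's; substitute a local SPDK checkout):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvmf/target/zcopy.sh --transport=tcp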
00:08:38.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.502 --rc genhtml_branch_coverage=1 00:08:38.502 --rc genhtml_function_coverage=1 00:08:38.502 --rc genhtml_legend=1 00:08:38.502 --rc geninfo_all_blocks=1 00:08:38.502 --rc geninfo_unexecuted_blocks=1 00:08:38.502 00:08:38.502 ' 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.502 --rc genhtml_branch_coverage=1 00:08:38.502 --rc genhtml_function_coverage=1 00:08:38.502 --rc genhtml_legend=1 00:08:38.502 --rc geninfo_all_blocks=1 00:08:38.502 --rc geninfo_unexecuted_blocks=1 00:08:38.502 00:08:38.502 ' 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.502 --rc genhtml_branch_coverage=1 00:08:38.502 --rc genhtml_function_coverage=1 00:08:38.502 --rc genhtml_legend=1 00:08:38.502 --rc geninfo_all_blocks=1 00:08:38.502 --rc geninfo_unexecuted_blocks=1 00:08:38.502 00:08:38.502 ' 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.502 --rc genhtml_branch_coverage=1 00:08:38.502 --rc genhtml_function_coverage=1 00:08:38.502 --rc genhtml_legend=1 00:08:38.502 --rc geninfo_all_blocks=1 00:08:38.502 --rc geninfo_unexecuted_blocks=1 00:08:38.502 00:08:38.502 ' 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.502 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.503 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.503 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:38.503 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:38.503 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:38.503 13:00:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:45.079 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:45.079 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:45.079 Found net devices under 0000:86:00.0: cvl_0_0 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:45.079 Found net devices under 0000:86:00.1: cvl_0_1 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.079 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:45.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:08:45.080 00:08:45.080 --- 10.0.0.2 ping statistics --- 00:08:45.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.080 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:08:45.080 00:08:45.080 --- 10.0.0.1 ping statistics --- 00:08:45.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.080 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2718805 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2718805 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2718805 ']' 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.080 13:00:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.080 [2024-11-19 13:00:47.897337] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:08:45.080 [2024-11-19 13:00:47.897388] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.080 [2024-11-19 13:00:47.977706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.080 [2024-11-19 13:00:48.016672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.080 [2024-11-19 13:00:48.016705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.080 [2024-11-19 13:00:48.016712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.080 [2024-11-19 13:00:48.016719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.080 [2024-11-19 13:00:48.016724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.080 [2024-11-19 13:00:48.017300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.080 [2024-11-19 13:00:48.159904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.080 [2024-11-19 13:00:48.180118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.080 malloc0 00:08:45.080 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.081 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:45.081 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.081 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.081 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.081 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:45.081 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:45.081 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:45.081 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.081 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.081 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.081 { 00:08:45.081 "params": { 00:08:45.081 "name": "Nvme$subsystem", 00:08:45.081 "trtype": "$TEST_TRANSPORT", 00:08:45.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.081 "adrfam": "ipv4", 00:08:45.081 "trsvcid": "$NVMF_PORT", 00:08:45.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.081 "hdgst": ${hdgst:-false}, 00:08:45.081 "ddgst": ${ddgst:-false} 00:08:45.081 }, 00:08:45.081 "method": "bdev_nvme_attach_controller" 00:08:45.081 } 00:08:45.081 EOF 00:08:45.081 )") 00:08:45.081 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:45.081 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
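By this point zcopy.sh has a complete target running inside the namespace: a TCP transport created with zero-copy enabled (--zcopy) and in-capsule data disabled (-c 0), one subsystem backed by a 32 MiB malloc bdev with 4 KiB blocks, and data plus discovery listeners on 10.0.0.2:4420. rpc_cmd is the harness wrapper around scripts/rpc.py, so an equivalent manual bring-up against a running nvmf_tgt would look roughly as follows, with every argument copied from the trace (rpc.py targets the default /var/tmp/spdk.sock, which the log shows this target using):

    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1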
00:08:45.081 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:45.081 13:00:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.081 "params": { 00:08:45.081 "name": "Nvme1", 00:08:45.081 "trtype": "tcp", 00:08:45.081 "traddr": "10.0.0.2", 00:08:45.081 "adrfam": "ipv4", 00:08:45.081 "trsvcid": "4420", 00:08:45.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.081 "hdgst": false, 00:08:45.081 "ddgst": false 00:08:45.081 }, 00:08:45.081 "method": "bdev_nvme_attach_controller" 00:08:45.081 }' 00:08:45.081 [2024-11-19 13:00:48.265341] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:45.081 [2024-11-19 13:00:48.265382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718825 ] 00:08:45.081 [2024-11-19 13:00:48.340759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.081 [2024-11-19 13:00:48.382267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.340 Running I/O for 10 seconds... 00:08:47.653 8338.00 IOPS, 65.14 MiB/s [2024-11-19T12:00:51.966Z] 8394.00 IOPS, 65.58 MiB/s [2024-11-19T12:00:52.919Z] 8444.67 IOPS, 65.97 MiB/s [2024-11-19T12:00:53.968Z] 8484.25 IOPS, 66.28 MiB/s [2024-11-19T12:00:54.905Z] 8496.80 IOPS, 66.38 MiB/s [2024-11-19T12:00:55.842Z] 8504.67 IOPS, 66.44 MiB/s [2024-11-19T12:00:56.778Z] 8518.86 IOPS, 66.55 MiB/s [2024-11-19T12:00:57.716Z] 8528.88 IOPS, 66.63 MiB/s [2024-11-19T12:00:59.096Z] 8532.56 IOPS, 66.66 MiB/s [2024-11-19T12:00:59.096Z] 8533.80 IOPS, 66.67 MiB/s 00:08:55.719 Latency(us) 00:08:55.719 [2024-11-19T12:00:59.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.719 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:55.719 Verification LBA range: start 0x0 length 0x1000 00:08:55.719 Nvme1n1 : 10.01 8536.81 66.69 0.00 0.00 14951.62 1709.63 24162.84 00:08:55.719 [2024-11-19T12:00:59.096Z] =================================================================================================================== 00:08:55.719 [2024-11-19T12:00:59.096Z] Total : 8536.81 66.69 0.00 0.00 14951.62 1709.63 24162.84 00:08:55.719 13:00:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2720661 00:08:55.719 13:00:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:55.719 13:00:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.719 13:00:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:55.719 13:00:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:55.719 13:00:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:55.719 13:00:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.719 13:00:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.719 13:00:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:55.719 { 00:08:55.719 "params": { 00:08:55.719 "name": 
"Nvme$subsystem", 00:08:55.719 "trtype": "$TEST_TRANSPORT", 00:08:55.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.719 "adrfam": "ipv4", 00:08:55.719 "trsvcid": "$NVMF_PORT", 00:08:55.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.719 "hdgst": ${hdgst:-false}, 00:08:55.719 "ddgst": ${ddgst:-false} 00:08:55.719 }, 00:08:55.719 "method": "bdev_nvme_attach_controller" 00:08:55.719 } 00:08:55.719 EOF 00:08:55.719 )") 00:08:55.719 13:00:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:55.719 [2024-11-19 13:00:58.859926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:58.859968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 13:00:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:55.719 13:00:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:55.719 13:00:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.719 "params": { 00:08:55.719 "name": "Nvme1", 00:08:55.719 "trtype": "tcp", 00:08:55.719 "traddr": "10.0.0.2", 00:08:55.719 "adrfam": "ipv4", 00:08:55.719 "trsvcid": "4420", 00:08:55.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.719 "hdgst": false, 00:08:55.719 "ddgst": false 00:08:55.719 }, 00:08:55.719 "method": "bdev_nvme_attach_controller" 00:08:55.719 }' 00:08:55.719 [2024-11-19 13:00:58.871930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:58.871942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:58.883959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:58.883969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:58.895986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:58.895996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:58.900667] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:08:55.719 [2024-11-19 13:00:58.900710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2720661 ] 00:08:55.719 [2024-11-19 13:00:58.908025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:58.908039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:58.920119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:58.920133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:58.932148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:58.932158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:58.944179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:58.944189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:58.956221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:58.956231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:58.968250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:58.968259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:58.975024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.719 [2024-11-19 13:00:58.980276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:58.980286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:58.992309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:58.992323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:59.004343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:59.004354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:59.016378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:59.016391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:59.017405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.719 [2024-11-19 13:00:59.028418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:59.028434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:59.040449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:59.040468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:59.052478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:08:55.719 [2024-11-19 13:00:59.052491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:59.064508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:59.064521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:59.076541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:59.076555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.719 [2024-11-19 13:00:59.088569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.719 [2024-11-19 13:00:59.088581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.983 [2024-11-19 13:00:59.100599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.983 [2024-11-19 13:00:59.100610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.983 [2024-11-19 13:00:59.112650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.983 [2024-11-19 13:00:59.112670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.983 [2024-11-19 13:00:59.124674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.983 [2024-11-19 13:00:59.124688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.983 [2024-11-19 13:00:59.136707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.983 [2024-11-19 13:00:59.136719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.983 [2024-11-19 13:00:59.148736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.983 [2024-11-19 13:00:59.148745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.983 [2024-11-19 13:00:59.160766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.984 [2024-11-19 13:00:59.160775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.984 [2024-11-19 13:00:59.172807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.984 [2024-11-19 13:00:59.172822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.984 [2024-11-19 13:00:59.184839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.984 [2024-11-19 13:00:59.184853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.984 [2024-11-19 13:00:59.196868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.984 [2024-11-19 13:00:59.196882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.984 [2024-11-19 13:00:59.208902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.984 [2024-11-19 13:00:59.208913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.984 [2024-11-19 13:00:59.220933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.984 [2024-11-19 13:00:59.220942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.984 [2024-11-19 
13:00:59.232976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.984 [2024-11-19 13:00:59.232990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.984 [2024-11-19 13:00:59.245007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.984 [2024-11-19 13:00:59.245017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.985 [2024-11-19 13:00:59.257031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.985 [2024-11-19 13:00:59.257040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.985 [2024-11-19 13:00:59.269062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.985 [2024-11-19 13:00:59.269073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.985 [2024-11-19 13:00:59.281095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.985 [2024-11-19 13:00:59.281107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.985 [2024-11-19 13:00:59.293128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.985 [2024-11-19 13:00:59.293137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.985 [2024-11-19 13:00:59.305159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.985 [2024-11-19 13:00:59.305169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.985 [2024-11-19 13:00:59.317193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.985 [2024-11-19 13:00:59.317215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.985 [2024-11-19 13:00:59.329237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.985 [2024-11-19 13:00:59.329255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.985 Running I/O for 5 seconds... 
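"Running I/O for 5 seconds..." marks the start of the timed phase of the second bdevperf run (target/zcopy.sh@37). The repeated subsystem.c / nvmf_rpc.c ERROR pairs surrounding it are the test re-issuing nvmf_subsystem_add_ns for an NSID that is already attached; each RPC is rejected roughly every 12 ms while the run proceeds normally, so these lines appear to exercise the error path rather than indicate a failure. For reference, the bdevperf flags recorded in the trace (standard SPDK bdevperf options):

    build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
        # --json    bdev/controller config piped in (output of gen_nvmf_target_json above)
        # -t 5      run time in seconds
        # -q 128    queue depth
        # -w randrw random mixed read/write workload
        # -M 50     50% reads / 50% writes
        # -o 8192   8 KiB I/O size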
00:08:55.985 [2024-11-19 13:00:59.341260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.986 [2024-11-19 13:00:59.341270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.986 [2024-11-19 13:00:59.356550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.986 [2024-11-19 13:00:59.356569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.370921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.370941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.384955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.384974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.399075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.399095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.413170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.413190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.426981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.426999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.440868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.440887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.454880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.454900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.469198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.469227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.483263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.483283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.497365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.497384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.511671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.511690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.525969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.525987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.541402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 
[2024-11-19 13:00:59.541423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.555527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.555545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.569495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.569514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.583776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.583795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.597822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.597840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.249 [2024-11-19 13:00:59.611838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.249 [2024-11-19 13:00:59.611856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.625968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.625987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.640183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.640201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.650687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.650705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.665301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.665319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.678973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.678992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.693191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.693210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.706614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.706633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.721202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.721231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.737102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.737120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.751145] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.751164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.765220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.765239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.776060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.776080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.790413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.790433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.804257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.804277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.818305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.818324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.832645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.832665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.847086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.847105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.857884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.857904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.872669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.872692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.509 [2024-11-19 13:00:59.883387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.509 [2024-11-19 13:00:59.883406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:00:59.897763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:00:59.897782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:00:59.911669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:00:59.911692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:00:59.921242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:00:59.921262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:00:59.936036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:00:59.936055] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:00:59.949869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:00:59.949888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:00:59.963844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:00:59.963864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:00:59.977649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:00:59.977667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:00:59.991461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:00:59.991479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:01:00.005707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:01:00.005727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:01:00.016135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:01:00.016154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:01:00.030771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:01:00.030791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:01:00.044674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:01:00.044693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:01:00.059109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:01:00.059131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:01:00.073256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:01:00.073275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:01:00.087266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:01:00.087285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:01:00.101717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:01:00.101735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:01:00.117006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:01:00.117025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.769 [2024-11-19 13:01:00.131334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.769 [2024-11-19 13:01:00.131354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.028 [2024-11-19 13:01:00.145474] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.028 [2024-11-19 13:01:00.145494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.028 [2024-11-19 13:01:00.159597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.028 [2024-11-19 13:01:00.159618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.028 [2024-11-19 13:01:00.173819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.028 [2024-11-19 13:01:00.173837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.028 [2024-11-19 13:01:00.187770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.028 [2024-11-19 13:01:00.187790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.028 [2024-11-19 13:01:00.201936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.028 [2024-11-19 13:01:00.201960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.028 [2024-11-19 13:01:00.216175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.028 [2024-11-19 13:01:00.216194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.028 [2024-11-19 13:01:00.230119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.028 [2024-11-19 13:01:00.230138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.028 [2024-11-19 13:01:00.244724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.028 [2024-11-19 13:01:00.244742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.029 [2024-11-19 13:01:00.260956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.029 [2024-11-19 13:01:00.260976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.029 [2024-11-19 13:01:00.275425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.029 [2024-11-19 13:01:00.275444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.029 [2024-11-19 13:01:00.286335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.029 [2024-11-19 13:01:00.286353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.029 [2024-11-19 13:01:00.300702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.029 [2024-11-19 13:01:00.300720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.029 [2024-11-19 13:01:00.315166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.029 [2024-11-19 13:01:00.315184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.029 [2024-11-19 13:01:00.326738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.029 [2024-11-19 13:01:00.326756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.029 16425.00 IOPS, 128.32 MiB/s [2024-11-19T12:01:00.406Z] [2024-11-19 13:01:00.341113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
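[note] The repeated pair above appears to be SPDK's expected failure path for a namespace-ID collision: while the timed I/O workload runs, the test keeps re-issuing an add-namespace RPC for an NSID that is already taken, and the target rejects each attempt from spdk_nvmf_subsystem_add_ns_ext. A minimal sketch of how the same error can be provoked against a running nvmf target with the stock scripts/rpc.py helper; the subsystem NQN and bdev names below are illustrative, not taken from this job:

    # create a subsystem and claim NSID 1 with one bdev
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
    # a second add with the same --nsid should fail with "Requested NSID 1 already in use"
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1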
00:08:57.029 [same error pair repeats every 10-15 ms from 13:01:00.341113 through 13:01:01.340742; duplicates elided]
00:08:58.066 16489.00 IOPS, 128.82 MiB/s [2024-11-19T12:01:01.443Z]
00:08:58.066 [same error pair repeats from 13:01:01.355217 through 13:01:01.525726; duplicates elided]
00:08:58.326 [same error pair repeats every 10-15 ms from 13:01:01.525745 through 13:01:02.338815; duplicates elided]
00:08:59.104 16520.67 IOPS, 129.07 MiB/s [2024-11-19T12:01:02.481Z]
00:08:59.104 [same error pair repeats every 10-15 ms from 13:01:02.349525 through 13:01:03.092661; duplicates elided]
00:08:59.883 [2024-11-19 13:01:03.106669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.883 [2024-11-19 13:01:03.106688]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.883 [2024-11-19 13:01:03.120793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.883 [2024-11-19 13:01:03.120812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.883 [2024-11-19 13:01:03.134772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.883 [2024-11-19 13:01:03.134790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.883 [2024-11-19 13:01:03.148687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.883 [2024-11-19 13:01:03.148705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.883 [2024-11-19 13:01:03.162803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.883 [2024-11-19 13:01:03.162823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.883 [2024-11-19 13:01:03.176861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.883 [2024-11-19 13:01:03.176881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.883 [2024-11-19 13:01:03.191477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.883 [2024-11-19 13:01:03.191496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.883 [2024-11-19 13:01:03.202434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.883 [2024-11-19 13:01:03.202453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.883 [2024-11-19 13:01:03.216735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.883 [2024-11-19 13:01:03.216754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.883 [2024-11-19 13:01:03.230376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.883 [2024-11-19 13:01:03.230395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.883 [2024-11-19 13:01:03.241096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.883 [2024-11-19 13:01:03.241114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.883 [2024-11-19 13:01:03.255714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.883 [2024-11-19 13:01:03.255733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.269362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.269381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.283623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.283641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.298384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.298402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.313913] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.313932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.328233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.328252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.338908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.338926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 16524.75 IOPS, 129.10 MiB/s [2024-11-19T12:01:03.520Z] [2024-11-19 13:01:03.353695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.353714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.367974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.367993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.378687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.378706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.392936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.392960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.406729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.406748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.421363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.421382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.437372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.437391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.451206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.451224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.465231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.465250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.479154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.479173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.493145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.493164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.507320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:00.143 [2024-11-19 13:01:03.507339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.143 [2024-11-19 13:01:03.517890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.143 [2024-11-19 13:01:03.517908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.402 [2024-11-19 13:01:03.532550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.402 [2024-11-19 13:01:03.532569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.402 [2024-11-19 13:01:03.546386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.402 [2024-11-19 13:01:03.546405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.402 [2024-11-19 13:01:03.560551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.402 [2024-11-19 13:01:03.560574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.402 [2024-11-19 13:01:03.575062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.402 [2024-11-19 13:01:03.575080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.402 [2024-11-19 13:01:03.590279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.402 [2024-11-19 13:01:03.590298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.402 [2024-11-19 13:01:03.603863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.402 [2024-11-19 13:01:03.603881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.402 [2024-11-19 13:01:03.618058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.402 [2024-11-19 13:01:03.618077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.402 [2024-11-19 13:01:03.632229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.402 [2024-11-19 13:01:03.632249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.402 [2024-11-19 13:01:03.646377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.402 [2024-11-19 13:01:03.646395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.403 [2024-11-19 13:01:03.657606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.403 [2024-11-19 13:01:03.657624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.403 [2024-11-19 13:01:03.672448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.403 [2024-11-19 13:01:03.672468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.403 [2024-11-19 13:01:03.683345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.403 [2024-11-19 13:01:03.683363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.403 [2024-11-19 13:01:03.697701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.403 [2024-11-19 13:01:03.697720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.403 [2024-11-19 13:01:03.711492] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.403 [2024-11-19 13:01:03.711511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.403 [2024-11-19 13:01:03.725869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.403 [2024-11-19 13:01:03.725888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.403 [2024-11-19 13:01:03.740076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.403 [2024-11-19 13:01:03.740094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.403 [2024-11-19 13:01:03.754454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.403 [2024-11-19 13:01:03.754473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.403 [2024-11-19 13:01:03.768279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.403 [2024-11-19 13:01:03.768298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.782652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.782671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.794030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.794049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.808473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.808491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.822173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.822195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.836288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.836307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.850503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.850523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.864246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.864264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.878731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.878750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.889366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.889385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.903516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.903534] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.916890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.916910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.930810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.930829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.944832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.944853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.959075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.959095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.969977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.969996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.984480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.984499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:03.998079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:03.998098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:04.012388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:04.012407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.662 [2024-11-19 13:01:04.026521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.662 [2024-11-19 13:01:04.026541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.040715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.040734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.055238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.055257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.065761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.065781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.075263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.075290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.089613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.089632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.103357] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.103376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.117708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.117727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.131718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.131738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.146180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.146199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.156826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.156845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.171166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.171185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.184827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.184847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.199146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.199168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.213131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.213151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.227100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.227120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.241131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.241150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.255108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.255126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.269107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.269126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.921 [2024-11-19 13:01:04.283278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.921 [2024-11-19 13:01:04.283298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.181 [2024-11-19 13:01:04.297435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.181 [2024-11-19 13:01:04.297454] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.181 [2024-11-19 13:01:04.311771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.181 [2024-11-19 13:01:04.311790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.181 [2024-11-19 13:01:04.326212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.181 [2024-11-19 13:01:04.326230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.181 [2024-11-19 13:01:04.341854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.181 [2024-11-19 13:01:04.341873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.181 16541.60 IOPS, 129.23 MiB/s [2024-11-19T12:01:04.558Z]
[2024-11-19 13:01:04.355076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.181 [2024-11-19 13:01:04.355095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.181 
00:09:01.181 Latency(us)
00:09:01.181 [2024-11-19T12:01:04.558Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:09:01.181 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:01.181 Nvme1n1                     :       5.01   16544.43     129.25      0.00     0.00    7729.05    3490.50   17666.23
00:09:01.181 [2024-11-19T12:01:04.558Z] ===================================================================================================================
00:09:01.181 [2024-11-19T12:01:04.558Z] Total                       :            16544.43     129.25      0.00     0.00    7729.05    3490.50   17666.23
00:09:01.181 [2024-11-19 13:01:04.364360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.181 [2024-11-19 13:01:04.364376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.181 [2024-11-19 13:01:04.376389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.181 [2024-11-19 13:01:04.376403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.181 [2024-11-19 13:01:04.388429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.181 [2024-11-19 13:01:04.388445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.181 [2024-11-19 13:01:04.400453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.181 [2024-11-19 13:01:04.400470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.181 [2024-11-19 13:01:04.412484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.181 [2024-11-19 13:01:04.412498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.181 [2024-11-19 13:01:04.424513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.181 [2024-11-19 13:01:04.424526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.181 [2024-11-19 13:01:04.436546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.181 [2024-11-19 13:01:04.436560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.181 [2024-11-19 13:01:04.448579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
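The Latency(us) summary above is internally consistent: the job line reports an 8192-byte I/O size, and MiB/s is just IOPS scaled by that size. A quick check (an illustrative awk one-liner, not part of the captured run):

    $ awk 'BEGIN { printf "%.2f MiB/s\n", 16544.43 * 8192 / (1024 * 1024) }'
    129.25 MiB/s

which matches the 129.25 MiB/s printed in both the per-device and Total rows.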
00:09:01.181 [2024-11-19 13:01:04.448593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.181 [2024-11-19 13:01:04.460610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.181 [2024-11-19 13:01:04.460623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.181 [2024-11-19 13:01:04.472664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.181 [2024-11-19 13:01:04.472677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.182 [2024-11-19 13:01:04.484674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.182 [2024-11-19 13:01:04.484687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.182 [2024-11-19 13:01:04.496704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.182 [2024-11-19 13:01:04.496717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.182 [2024-11-19 13:01:04.508734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.182 [2024-11-19 13:01:04.508744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2720661) - No such process
00:09:01.182 13:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2720661
00:09:01.182 13:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:01.182 13:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:01.182 13:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:01.182 13:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:01.182 13:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:01.182 13:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:01.182 13:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:01.182 delay0
00:09:01.182 13:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:01.182 13:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:01.182 13:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:01.182 13:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:01.182 13:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:01.182 13:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
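Before the abort run, the test swaps the namespace's backing bdev for a delay bdev so the aborter has in-flight I/O to catch. rpc_cmd is the harness's RPC wrapper; outside the harness the same setup would look roughly like the sketch below (scripts/rpc.py entry point assumed; as I read the flags, -r/-t/-w/-n are the average and p99 read/write latencies in microseconds, and the bare 'delay0' echoed above is the RPC returning the new bdev's name):

    # sketch: recreate NSID 1 on top of a delay bdev backed by malloc0
    # (one-second latencies assumed to be in microseconds)
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

The abort invocation just above uses SPDK's perf-style options: -c 0x1 pins the worker to core 0, -q 64 keeps 64 commands in flight, -w randrw -M 50 is a 50/50 random read/write mix, -t 5 runs for five seconds, and -r names the TCP target to connect to.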
00:09:01.441 [2024-11-19 13:01:04.617310] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:08.008 Initializing NVMe Controllers
00:09:08.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:08.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:08.008 Initialization complete. Launching workers.
00:09:08.008 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 258
00:09:08.008 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 545, failed to submit 33
00:09:08.008 success 366, unsuccessful 179, failed 0
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:08.008 rmmod nvme_tcp
00:09:08.008 rmmod nvme_fabrics
00:09:08.008 rmmod nvme_keyring
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2718805 ']'
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2718805
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2718805 ']'
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2718805
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2718805
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2718805'
00:09:08.008 killing process with pid 2718805
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2718805
00:09:08.008 13:01:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2718805
00:09:08.008 13:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:08.008 13:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:08.008 13:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:08.008 13:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:09:08.008 13:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
13:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
13:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
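The last three trace lines show what the iptr helper expands to: the current firewall ruleset is saved, every rule tagged SPDK_NVMF is filtered out, and the remainder is loaded back. As a standalone sketch (same pipeline as the trace; the helper's real definition in nvmf/common.sh is not shown in this log):

    # drop SPDK's firewall rules, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore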
13:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
13:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
13:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
13:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
13:01:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:09.918 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:09.918 
00:09:09.918 real 0m31.437s
00:09:09.918 user 0m42.006s
00:09:09.918 sys 0m11.134s
00:09:09.918 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:09.918 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:09.918 ************************************
00:09:09.918 END TEST nvmf_zcopy
00:09:09.918 ************************************
00:09:09.918 13:01:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:09.918 13:01:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:09.918 13:01:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:09.918 13:01:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:09.918 ************************************
00:09:09.918 START TEST nvmf_nmic
00:09:09.918 ************************************
00:09:09.918 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:09.918 * Looking for test storage...
00:09:09.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.918 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:09.918 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:09.918 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:10.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.178 --rc genhtml_branch_coverage=1 00:09:10.178 --rc genhtml_function_coverage=1 00:09:10.178 --rc genhtml_legend=1 00:09:10.178 --rc geninfo_all_blocks=1 00:09:10.178 --rc geninfo_unexecuted_blocks=1 00:09:10.178 00:09:10.178 ' 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:10.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.178 --rc genhtml_branch_coverage=1 00:09:10.178 --rc genhtml_function_coverage=1 00:09:10.178 --rc genhtml_legend=1 00:09:10.178 --rc geninfo_all_blocks=1 00:09:10.178 --rc geninfo_unexecuted_blocks=1 00:09:10.178 00:09:10.178 ' 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:10.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.178 --rc genhtml_branch_coverage=1 00:09:10.178 --rc genhtml_function_coverage=1 00:09:10.178 --rc genhtml_legend=1 00:09:10.178 --rc geninfo_all_blocks=1 00:09:10.178 --rc geninfo_unexecuted_blocks=1 00:09:10.178 00:09:10.178 ' 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:10.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.178 --rc genhtml_branch_coverage=1 00:09:10.178 --rc genhtml_function_coverage=1 00:09:10.178 --rc genhtml_legend=1 00:09:10.178 --rc geninfo_all_blocks=1 00:09:10.178 --rc geninfo_unexecuted_blocks=1 00:09:10.178 00:09:10.178 ' 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
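The lt/cmp_versions trace above (from scripts/common.sh) is a field-by-field version comparison: both versions are split on '.', '-' and ':', then compared numerically left to right. A simplified reconstruction of that logic (illustrative sketch; the real helper also normalizes non-numeric fields via its decimal step):

    lt() {  # usage: lt 1.15 2  ->  succeeds when $1 < $2
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        # walk the longer of the two field lists, treating missing fields as 0
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1  # versions are equal
    }

Here lcov reports 1.15, which is less than 2, so the pre-2.0 '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' options are the ones exported into LCOV_OPTS above.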
00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.178 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:10.179 
13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:10.179 13:01:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:16.751 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:16.751 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.751 13:01:19 
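gather_supported_nvmf_pci_devs has now matched both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b, bound to the ice driver). The next step, visible in the trace as pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*), resolves each PCI function to its kernel interface purely through sysfs; the same lookup in isolation:

    # Print the netdev name(s) behind one PCI function, as the trace does.
    pci=0000:86:00.0
    for d in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$d" ] && echo "${d##*/}"    # prints cvl_0_0 on this machine
    done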
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:16.751 Found net devices under 0000:86:00.0: cvl_0_0 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:16.751 Found net devices under 0000:86:00.1: cvl_0_1 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:16.751 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:16.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:16.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:09:16.752 00:09:16.752 --- 10.0.0.2 ping statistics --- 00:09:16.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.752 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:16.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
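nvmf_tcp_init splits the two ports across network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2/24) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), so NVMe/TCP traffic between them must cross the (presumably back-to-back cabled) link rather than the kernel's local path. Condensed from the commands above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings that follow verify the path in both directions before anything NVMe-related starts.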
00:09:16.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:09:16.752 00:09:16.752 --- 10.0.0.1 ping statistics --- 00:09:16.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.752 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2726095 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2726095 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2726095 ']' 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.752 13:01:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.752 [2024-11-19 13:01:19.435028] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
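nvmfappstart launches nvmf_tgt inside the target namespace via "${NVMF_TARGET_NS_CMD[@]}", records its PID in nvmfpid, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. One plausible shape of that wait, sketched with SPDK's rpc.py; the real helper in autotest_common.sh is more elaborate (retry limits, liveness checks against the PID):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the JSON-RPC socket until the target responds.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done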
00:09:16.752 [2024-11-19 13:01:19.435073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.752 [2024-11-19 13:01:19.513838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.752 [2024-11-19 13:01:19.562194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.752 [2024-11-19 13:01:19.562233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.752 [2024-11-19 13:01:19.562240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.752 [2024-11-19 13:01:19.562246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.752 [2024-11-19 13:01:19.562251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.752 [2024-11-19 13:01:19.563730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.752 [2024-11-19 13:01:19.563765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.752 [2024-11-19 13:01:19.563875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.752 [2024-11-19 13:01:19.563875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.011 [2024-11-19 13:01:20.307218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.011 Malloc0 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic 
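From here the target is assembled entirely over JSON-RPC: a TCP transport (with the -o -u 8192 options recorded above), a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1, the bdev attached as a namespace, and a listener on 10.0.0.2:4420. The same sequence as direct rpc.py calls, mirroring the rpc_cmd lines in this and the next stretch of trace:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420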
-- common/autotest_common.sh@10 -- # set +x 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.011 [2024-11-19 13:01:20.367389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.011 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:17.011 test case1: single bdev can't be used in multiple subsystems 00:09:17.012 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:17.012 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.012 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.012 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.012 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:17.012 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.012 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.271 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.271 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:17.271 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:17.271 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.271 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.271 [2024-11-19 13:01:20.395290] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:17.271 [2024-11-19 13:01:20.395313] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:17.271 [2024-11-19 13:01:20.395320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.271 request: 00:09:17.271 { 00:09:17.271 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:17.271 "namespace": { 00:09:17.271 "bdev_name": "Malloc0", 00:09:17.271 "no_auto_visible": false 
00:09:17.271 }, 00:09:17.271 "method": "nvmf_subsystem_add_ns", 00:09:17.271 "req_id": 1 00:09:17.271 } 00:09:17.271 Got JSON-RPC error response 00:09:17.271 response: 00:09:17.271 { 00:09:17.271 "code": -32602, 00:09:17.271 "message": "Invalid parameters" 00:09:17.271 } 00:09:17.271 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:17.271 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:17.271 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:17.271 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:17.271 Adding namespace failed - expected result. 00:09:17.271 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:17.271 test case2: host connect to nvmf target in multiple paths 00:09:17.271 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:17.271 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.271 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.271 [2024-11-19 13:01:20.407435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:17.271 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.271 13:01:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:18.649 13:01:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:19.586 13:01:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:19.586 13:01:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:19.586 13:01:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.586 13:01:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:19.586 13:01:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:21.490 13:01:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:21.490 13:01:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:21.490 13:01:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:21.490 13:01:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:21.490 13:01:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:21.490 13:01:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:21.490 13:01:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
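Test case1 above exercises the bdev claim model: attaching Malloc0 to cnode1 claimed it exclusive_write, so the attempt to attach the same bdev to cnode2 fails in bdev_open (error -1) and surfaces as JSON-RPC -32602 "Invalid parameters"; the script counts the nonzero rpc_cmd status as the expected outcome. The shape of that expected-failure probe, condensed from the nmic.sh trace:

    nmic_status=0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
    if [ "$nmic_status" -eq 0 ]; then
        echo "bdev was shared across subsystems - that is the bug" >&2
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'

Test case2 then adds a second listener on port 4421 and connects the kernel initiator to the same NQN through both portals, which is why the eventual disconnect reports two controllers.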
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:21.490 [global] 00:09:21.490 thread=1 00:09:21.490 invalidate=1 00:09:21.490 rw=write 00:09:21.490 time_based=1 00:09:21.490 runtime=1 00:09:21.490 ioengine=libaio 00:09:21.490 direct=1 00:09:21.490 bs=4096 00:09:21.490 iodepth=1 00:09:21.490 norandommap=0 00:09:21.490 numjobs=1 00:09:21.490 00:09:21.490 verify_dump=1 00:09:21.490 verify_backlog=512 00:09:21.490 verify_state_save=0 00:09:21.490 do_verify=1 00:09:21.490 verify=crc32c-intel 00:09:21.490 [job0] 00:09:21.490 filename=/dev/nvme0n1 00:09:21.490 Could not set queue depth (nvme0n1) 00:09:21.750 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.750 fio-3.35 00:09:21.750 Starting 1 thread 00:09:23.127 00:09:23.127 job0: (groupid=0, jobs=1): err= 0: pid=2727264: Tue Nov 19 13:01:26 2024 00:09:23.127 read: IOPS=2631, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:09:23.127 slat (nsec): min=6727, max=27097, avg=7450.75, stdev=887.55 00:09:23.127 clat (usec): min=156, max=390, avg=207.35, stdev=37.78 00:09:23.127 lat (usec): min=163, max=397, avg=214.80, stdev=37.74 00:09:23.127 clat percentiles (usec): 00:09:23.127 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:09:23.127 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 217], 00:09:23.127 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:09:23.127 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 289], 99.95th=[ 388], 00:09:23.127 | 99.99th=[ 392] 00:09:23.127 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:23.127 slat (nsec): min=9225, max=44894, avg=10249.53, stdev=1233.99 00:09:23.127 clat (usec): min=102, max=281, avg=127.27, stdev=10.92 00:09:23.127 lat (usec): min=119, max=326, avg=137.52, stdev=11.17 00:09:23.127 clat percentiles (usec): 00:09:23.127 | 1.00th=[ 115], 5.00th=[ 118], 10.00th=[ 119], 20.00th=[ 121], 00:09:23.127 | 30.00th=[ 123], 40.00th=[ 124], 50.00th=[ 125], 60.00th=[ 127], 00:09:23.127 | 70.00th=[ 129], 80.00th=[ 131], 90.00th=[ 137], 95.00th=[ 153], 00:09:23.127 | 99.00th=[ 169], 99.50th=[ 172], 99.90th=[ 180], 99.95th=[ 182], 00:09:23.127 | 99.99th=[ 281] 00:09:23.127 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:23.127 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:23.127 lat (usec) : 250=88.96%, 500=11.04% 00:09:23.127 cpu : usr=3.00%, sys=5.00%, ctx=5706, majf=0, minf=1 00:09:23.127 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.127 issued rwts: total=2634,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.127 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.127 00:09:23.127 Run status group 0 (all jobs): 00:09:23.127 READ: bw=10.3MiB/s (10.8MB/s), 10.3MiB/s-10.3MiB/s (10.8MB/s-10.8MB/s), io=10.3MiB (10.8MB), run=1001-1001msec 00:09:23.127 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:23.127 00:09:23.127 Disk stats (read/write): 00:09:23.127 nvme0n1: ios=2535/2560, merge=0/0, ticks=514/308, in_queue=822, util=91.38% 00:09:23.127 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n 
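The fio-wrapper run above drives the freshly connected namespace with a libaio job: direct=1, 4 KiB blocks, queue depth 1, a one-second time-based write pass with crc32c-intel verification (the read I/O in the results is the verify phase reading data back). A roughly equivalent standalone invocation:

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512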
nqn.2016-06.io.spdk:cnode1 00:09:23.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:23.127 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.127 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:23.127 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:23.127 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.127 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:23.127 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.127 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:23.127 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:23.127 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:23.127 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:23.127 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:23.127 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:23.127 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:23.128 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:23.128 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:23.387 rmmod nvme_tcp 00:09:23.387 rmmod nvme_fabrics 00:09:23.387 rmmod nvme_keyring 00:09:23.387 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:23.387 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:23.387 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:23.387 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2726095 ']' 00:09:23.387 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2726095 00:09:23.387 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2726095 ']' 00:09:23.387 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2726095 00:09:23.387 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:23.387 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.387 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2726095 00:09:23.387 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:23.387 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:23.387 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2726095' 00:09:23.387 killing process with pid 2726095 00:09:23.387 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2726095 00:09:23.387 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 
2726095 00:09:23.647 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:23.647 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:23.647 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:23.647 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:23.647 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:23.647 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:23.647 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:23.647 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:23.647 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:23.647 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.647 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.647 13:01:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.555 13:01:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.555 00:09:25.555 real 0m15.701s 00:09:25.555 user 0m36.001s 00:09:25.555 sys 0m5.428s 00:09:25.555 13:01:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.555 13:01:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.555 ************************************ 00:09:25.555 END TEST nvmf_nmic 00:09:25.555 ************************************ 00:09:25.555 13:01:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:25.555 13:01:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:25.555 13:01:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.555 13:01:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.815 ************************************ 00:09:25.815 START TEST nvmf_fio_target 00:09:25.815 ************************************ 00:09:25.815 13:01:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:25.815 * Looking for test storage... 
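nvmftestfini unwinds the nmic setup in reverse: disconnect the initiator, sync, unload nvme-tcp (which drags out nvme_fabrics and nvme_keyring, hence the rmmod lines), kill the target by PID, sweep the tagged iptables rule, flush addresses, and drop the namespace. Condensed, with the namespace deletion assumed to live in the untraced _remove_spdk_ns helper:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp
    kill 2726095 && wait 2726095        # nvmfpid captured at startup
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk     # assumed: _remove_spdk_ns is not traced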
00:09:25.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.815 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.816 --rc genhtml_branch_coverage=1 00:09:25.816 --rc genhtml_function_coverage=1 00:09:25.816 --rc genhtml_legend=1 00:09:25.816 --rc geninfo_all_blocks=1 00:09:25.816 --rc geninfo_unexecuted_blocks=1 00:09:25.816 00:09:25.816 ' 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.816 --rc genhtml_branch_coverage=1 00:09:25.816 --rc genhtml_function_coverage=1 00:09:25.816 --rc genhtml_legend=1 00:09:25.816 --rc geninfo_all_blocks=1 00:09:25.816 --rc geninfo_unexecuted_blocks=1 00:09:25.816 00:09:25.816 ' 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.816 --rc genhtml_branch_coverage=1 00:09:25.816 --rc genhtml_function_coverage=1 00:09:25.816 --rc genhtml_legend=1 00:09:25.816 --rc geninfo_all_blocks=1 00:09:25.816 --rc geninfo_unexecuted_blocks=1 00:09:25.816 00:09:25.816 ' 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.816 --rc genhtml_branch_coverage=1 00:09:25.816 --rc genhtml_function_coverage=1 00:09:25.816 --rc genhtml_legend=1 00:09:25.816 --rc geninfo_all_blocks=1 00:09:25.816 --rc geninfo_unexecuted_blocks=1 00:09:25.816 00:09:25.816 ' 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
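The scripts/common.sh helper traced above ("lt 1.15 2") decides whether the installed lcov predates 2.x by splitting each version string on ".-:" and comparing fields numerically, left to right, treating missing fields as zero. A condensed sketch of the same idea (digits-only; the real cmp_versions also tolerates suffixes):

    lt() {   # lt A B: succeed when version A sorts strictly before version B
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }
    lt 1.15 2 && echo "lcov is pre-2.x"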
uname -s 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:25.816 13:01:29 
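The PATH above is the cumulative effect of paths/export.sh being re-sourced for every test: each pass blindly prepends the same three toolchain directories, so by now /opt/go/1.21.1/bin appears seven times. Harmless, but the usual guard is an idempotent prepend; prepend_path here is an illustrative name, not an SPDK helper:

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present: leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin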
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:25.816 13:01:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.384 13:01:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:32.384 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:32.384 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.384 13:01:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:32.384 Found net devices under 0000:86:00.0: cvl_0_0 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:32.384 Found net devices under 0000:86:00.1: cvl_0_1 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:32.384 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:32.385 13:01:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.385 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.385 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:32.385 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:32.385 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:32.385 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:32.385 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:32.385 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:32.385 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:32.385 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.385 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:32.385 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:32.385 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.385 13:01:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:32.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:09:32.385 00:09:32.385 --- 10.0.0.2 ping statistics --- 00:09:32.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.385 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
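The firewall handling here is a tag-and-sweep pattern: the ipts wrapper appends a fixed SPDK_NVMF comment to every rule it inserts, so teardown (iptr, seen at the end of the nmic test) can remove them all by filtering the saved ruleset instead of tracking rules one by one. Both halves, verbatim from the traces:

    # insert (ipts):
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # sweep (iptr, on teardown):
    iptables-save | grep -v SPDK_NVMF | iptables-restore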
00:09:32.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:09:32.385 00:09:32.385 --- 10.0.0.1 ping statistics --- 00:09:32.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.385 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2731118 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2731118 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2731118 ']' 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.385 [2024-11-19 13:01:35.246118] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
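The nvmf_tcp_init sequence traced above builds a loopback-free test topology: one e810 port (cvl_0_0) is moved into a private namespace and addressed 10.0.0.2, its peer (cvl_0_1) stays in the root namespace at 10.0.0.1, an iptables rule opens the NVMe/TCP port, both directions are ping-verified, and the target app is then launched inside the namespace. A condensed replay, commands as in the trace:

    # Condensed nvmf_tcp_init + target launch, per the log above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &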
00:09:32.385 [2024-11-19 13:01:35.246171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.385 [2024-11-19 13:01:35.324670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.385 [2024-11-19 13:01:35.367687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.385 [2024-11-19 13:01:35.367725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.385 [2024-11-19 13:01:35.367732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.385 [2024-11-19 13:01:35.367738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.385 [2024-11-19 13:01:35.367745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.385 [2024-11-19 13:01:35.369308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.385 [2024-11-19 13:01:35.369420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.385 [2024-11-19 13:01:35.369549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.385 [2024-11-19 13:01:35.369550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:32.385 [2024-11-19 13:01:35.683490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.385 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:32.644 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:32.644 13:01:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:32.903 13:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:32.903 13:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.162 13:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:33.162 13:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.420 13:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:33.420 13:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:33.679 13:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.679 13:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:33.679 13:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.938 13:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:33.938 13:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.197 13:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:34.197 13:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:34.455 13:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:34.714 13:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:34.714 13:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.714 13:01:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:34.714 13:01:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:34.973 13:01:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.231 [2024-11-19 13:01:38.414231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.231 13:01:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:35.490 13:01:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:35.490 13:01:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:36.864 13:01:39 
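The RPC sequence just traced provisions everything fio will later see as /dev/nvme0n1..n4: two plain malloc bdevs, a raid0 over two more, a concat over three more, all exported as namespaces of one subsystem. Condensed below, with rpc.py standing in for the full scripts/rpc.py path (64 MB malloc bdevs, 512 B blocks, as in the trace):

    # Provisioning sequence from target/fio.sh, condensed.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                    # -> Malloc0 (repeated for Malloc1..Malloc6)
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do       # four namespaces -> nvme0n1..n4
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420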
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:36.864 13:01:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:36.864 13:01:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:36.864 13:01:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:36.864 13:01:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:36.864 13:01:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:38.766 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:38.766 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:38.766 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:38.766 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:38.766 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:38.766 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:38.766 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:38.766 [global] 00:09:38.766 thread=1 00:09:38.766 invalidate=1 00:09:38.766 rw=write 00:09:38.766 time_based=1 00:09:38.766 runtime=1 00:09:38.766 ioengine=libaio 00:09:38.766 direct=1 00:09:38.766 bs=4096 00:09:38.766 iodepth=1 00:09:38.766 norandommap=0 00:09:38.766 numjobs=1 00:09:38.767 00:09:38.767 verify_dump=1 00:09:38.767 verify_backlog=512 00:09:38.767 verify_state_save=0 00:09:38.767 do_verify=1 00:09:38.767 verify=crc32c-intel 00:09:38.767 [job0] 00:09:38.767 filename=/dev/nvme0n1 00:09:38.767 [job1] 00:09:38.767 filename=/dev/nvme0n2 00:09:38.767 [job2] 00:09:38.767 filename=/dev/nvme0n3 00:09:38.767 [job3] 00:09:38.767 filename=/dev/nvme0n4 00:09:38.767 Could not set queue depth (nvme0n1) 00:09:38.767 Could not set queue depth (nvme0n2) 00:09:38.767 Could not set queue depth (nvme0n3) 00:09:38.767 Could not set queue depth (nvme0n4) 00:09:39.025 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.025 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.025 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.025 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.025 fio-3.35 00:09:39.025 Starting 4 threads 00:09:40.402 00:09:40.402 job0: (groupid=0, jobs=1): err= 0: pid=2732474: Tue Nov 19 13:01:43 2024 00:09:40.403 read: IOPS=2095, BW=8384KiB/s (8585kB/s)(8392KiB/1001msec) 00:09:40.403 slat (nsec): min=6625, max=32829, avg=9007.88, stdev=1505.22 00:09:40.403 clat (usec): min=170, max=508, avg=261.06, stdev=40.15 00:09:40.403 lat (usec): min=178, max=518, avg=270.07, stdev=40.62 00:09:40.403 clat percentiles (usec): 00:09:40.403 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 235], 
00:09:40.403 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 255], 00:09:40.403 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 322], 00:09:40.403 | 99.00th=[ 441], 99.50th=[ 469], 99.90th=[ 490], 99.95th=[ 494], 00:09:40.403 | 99.99th=[ 510] 00:09:40.403 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:40.403 slat (nsec): min=9690, max=42918, avg=11975.87, stdev=1821.52 00:09:40.403 clat (usec): min=110, max=333, avg=152.40, stdev=25.84 00:09:40.403 lat (usec): min=121, max=347, avg=164.38, stdev=25.80 00:09:40.403 clat percentiles (usec): 00:09:40.403 | 1.00th=[ 118], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 133], 00:09:40.403 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 151], 00:09:40.403 | 70.00th=[ 161], 80.00th=[ 176], 90.00th=[ 190], 95.00th=[ 202], 00:09:40.403 | 99.00th=[ 229], 99.50th=[ 237], 99.90th=[ 269], 99.95th=[ 281], 00:09:40.403 | 99.99th=[ 334] 00:09:40.403 bw ( KiB/s): min= 9864, max= 9864, per=43.41%, avg=9864.00, stdev= 0.00, samples=1 00:09:40.403 iops : min= 2466, max= 2466, avg=2466.00, stdev= 0.00, samples=1 00:09:40.403 lat (usec) : 250=77.07%, 500=22.91%, 750=0.02% 00:09:40.403 cpu : usr=2.80%, sys=5.00%, ctx=4659, majf=0, minf=1 00:09:40.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.403 issued rwts: total=2098,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.403 job1: (groupid=0, jobs=1): err= 0: pid=2732475: Tue Nov 19 13:01:43 2024 00:09:40.403 read: IOPS=267, BW=1068KiB/s (1094kB/s)(1096KiB/1026msec) 00:09:40.403 slat (nsec): min=6923, max=34952, avg=8705.30, stdev=3044.61 00:09:40.403 clat (usec): min=199, max=42049, avg=3382.32, stdev=10910.23 00:09:40.403 lat (usec): min=207, max=42062, avg=3391.03, stdev=10912.02 00:09:40.403 clat percentiles (usec): 00:09:40.403 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:09:40.403 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:09:40.403 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[41157], 00:09:40.403 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:40.403 | 99.99th=[42206] 00:09:40.403 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:09:40.403 slat (nsec): min=9957, max=66436, avg=12743.56, stdev=5428.08 00:09:40.403 clat (usec): min=130, max=305, avg=171.51, stdev=18.41 00:09:40.403 lat (usec): min=149, max=345, avg=184.25, stdev=20.12 00:09:40.403 clat percentiles (usec): 00:09:40.403 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:09:40.403 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:09:40.403 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 200], 00:09:40.403 | 99.00th=[ 247], 99.50th=[ 285], 99.90th=[ 306], 99.95th=[ 306], 00:09:40.403 | 99.99th=[ 306] 00:09:40.403 bw ( KiB/s): min= 4096, max= 4096, per=18.03%, avg=4096.00, stdev= 0.00, samples=1 00:09:40.403 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:40.403 lat (usec) : 250=86.90%, 500=10.31%, 750=0.13% 00:09:40.403 lat (msec) : 50=2.67% 00:09:40.403 cpu : usr=0.78%, sys=1.17%, ctx=786, majf=0, minf=1 00:09:40.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.403 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.403 issued rwts: total=274,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.403 job2: (groupid=0, jobs=1): err= 0: pid=2732476: Tue Nov 19 13:01:43 2024 00:09:40.403 read: IOPS=25, BW=104KiB/s (106kB/s)(104KiB/1004msec) 00:09:40.403 slat (nsec): min=8144, max=26959, avg=19832.08, stdev=5791.34 00:09:40.403 clat (usec): min=246, max=41973, avg=34352.58, stdev=14974.32 00:09:40.403 lat (usec): min=260, max=41997, avg=34372.41, stdev=14975.12 00:09:40.403 clat percentiles (usec): 00:09:40.403 | 1.00th=[ 247], 5.00th=[ 247], 10.00th=[ 347], 20.00th=[40633], 00:09:40.403 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:40.403 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:40.403 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:40.403 | 99.99th=[42206] 00:09:40.403 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:09:40.403 slat (nsec): min=9673, max=40568, avg=11371.03, stdev=2167.54 00:09:40.403 clat (usec): min=148, max=419, avg=200.33, stdev=25.58 00:09:40.403 lat (usec): min=160, max=459, avg=211.70, stdev=26.10 00:09:40.403 clat percentiles (usec): 00:09:40.403 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 182], 00:09:40.403 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:09:40.403 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 241], 95.00th=[ 243], 00:09:40.403 | 99.00th=[ 258], 99.50th=[ 273], 99.90th=[ 420], 99.95th=[ 420], 00:09:40.403 | 99.99th=[ 420] 00:09:40.403 bw ( KiB/s): min= 4096, max= 4096, per=18.03%, avg=4096.00, stdev= 0.00, samples=1 00:09:40.403 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:40.403 lat (usec) : 250=94.24%, 500=1.49%, 750=0.19% 00:09:40.403 lat (msec) : 50=4.09% 00:09:40.403 cpu : usr=0.50%, sys=0.40%, ctx=538, majf=0, minf=1 00:09:40.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.403 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.403 job3: (groupid=0, jobs=1): err= 0: pid=2732477: Tue Nov 19 13:01:43 2024 00:09:40.403 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:40.403 slat (nsec): min=7204, max=25044, avg=8433.55, stdev=1232.33 00:09:40.403 clat (usec): min=179, max=527, avg=279.44, stdev=60.58 00:09:40.403 lat (usec): min=187, max=535, avg=287.87, stdev=60.63 00:09:40.403 clat percentiles (usec): 00:09:40.403 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 241], 00:09:40.403 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 273], 00:09:40.403 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 392], 95.00th=[ 437], 00:09:40.403 | 99.00th=[ 474], 99.50th=[ 494], 99.90th=[ 529], 99.95th=[ 529], 00:09:40.403 | 99.99th=[ 529] 00:09:40.403 write: IOPS=2241, BW=8967KiB/s (9182kB/s)(8976KiB/1001msec); 0 zone resets 00:09:40.403 slat (nsec): min=10749, max=42780, avg=12194.57, stdev=2101.15 00:09:40.403 clat (usec): min=122, max=363, avg=164.80, stdev=31.17 00:09:40.403 lat (usec): min=133, max=374, avg=176.99, stdev=31.52 00:09:40.403 clat percentiles 
(usec): 00:09:40.403 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:09:40.403 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 159], 00:09:40.403 | 70.00th=[ 176], 80.00th=[ 194], 90.00th=[ 210], 95.00th=[ 231], 00:09:40.403 | 99.00th=[ 253], 99.50th=[ 269], 99.90th=[ 293], 99.95th=[ 338], 00:09:40.403 | 99.99th=[ 363] 00:09:40.403 bw ( KiB/s): min= 8192, max= 8192, per=36.05%, avg=8192.00, stdev= 0.00, samples=1 00:09:40.403 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:40.403 lat (usec) : 250=68.29%, 500=31.55%, 750=0.16% 00:09:40.403 cpu : usr=3.90%, sys=6.70%, ctx=4293, majf=0, minf=1 00:09:40.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.404 issued rwts: total=2048,2244,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.404 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.404 00:09:40.404 Run status group 0 (all jobs): 00:09:40.404 READ: bw=16.9MiB/s (17.7MB/s), 104KiB/s-8384KiB/s (106kB/s-8585kB/s), io=17.4MiB (18.2MB), run=1001-1026msec 00:09:40.404 WRITE: bw=22.2MiB/s (23.3MB/s), 1996KiB/s-9.99MiB/s (2044kB/s-10.5MB/s), io=22.8MiB (23.9MB), run=1001-1026msec 00:09:40.404 00:09:40.404 Disk stats (read/write): 00:09:40.404 nvme0n1: ios=1680/2048, merge=0/0, ticks=1416/306, in_queue=1722, util=97.49% 00:09:40.404 nvme0n2: ios=268/512, merge=0/0, ticks=678/83, in_queue=761, util=82.92% 00:09:40.404 nvme0n3: ios=21/512, merge=0/0, ticks=688/99, in_queue=787, util=87.61% 00:09:40.404 nvme0n4: ios=1558/1897, merge=0/0, ticks=1330/289, in_queue=1619, util=97.79% 00:09:40.404 13:01:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:40.404 [global] 00:09:40.404 thread=1 00:09:40.404 invalidate=1 00:09:40.404 rw=randwrite 00:09:40.404 time_based=1 00:09:40.404 runtime=1 00:09:40.404 ioengine=libaio 00:09:40.404 direct=1 00:09:40.404 bs=4096 00:09:40.404 iodepth=1 00:09:40.404 norandommap=0 00:09:40.404 numjobs=1 00:09:40.404 00:09:40.404 verify_dump=1 00:09:40.404 verify_backlog=512 00:09:40.404 verify_state_save=0 00:09:40.404 do_verify=1 00:09:40.404 verify=crc32c-intel 00:09:40.404 [job0] 00:09:40.404 filename=/dev/nvme0n1 00:09:40.404 [job1] 00:09:40.404 filename=/dev/nvme0n2 00:09:40.404 [job2] 00:09:40.404 filename=/dev/nvme0n3 00:09:40.404 [job3] 00:09:40.404 filename=/dev/nvme0n4 00:09:40.404 Could not set queue depth (nvme0n1) 00:09:40.404 Could not set queue depth (nvme0n2) 00:09:40.404 Could not set queue depth (nvme0n3) 00:09:40.404 Could not set queue depth (nvme0n4) 00:09:40.662 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.662 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.662 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.662 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.662 fio-3.35 00:09:40.662 Starting 4 threads 00:09:42.039 00:09:42.039 job0: (groupid=0, jobs=1): err= 0: pid=2732843: Tue Nov 19 13:01:45 2024 00:09:42.039 read: IOPS=22, BW=89.0KiB/s (91.1kB/s)(92.0KiB/1034msec) 
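Both fio-wrapper invocations so far expand to the same job-file shape: the [global] block printed in the log plus one [jobN] stanza per connected namespace. A reconstruction as a heredoc (the file name is hypothetical; rw switches between write and randwrite per the wrapper's -t flag):

    # Reconstruction of the job file the fio-wrapper prints above.
    cat > /tmp/nvmf.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=randwrite
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel
    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio /tmp/nvmf.fio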
00:09:42.039 slat (nsec): min=7760, max=18335, avg=12415.09, stdev=4262.57 00:09:42.039 clat (usec): min=40820, max=42001, avg=41045.65, stdev=239.20 00:09:42.039 lat (usec): min=40837, max=42009, avg=41058.06, stdev=238.16 00:09:42.039 clat percentiles (usec): 00:09:42.039 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:42.039 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:42.039 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:42.039 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:42.039 | 99.99th=[42206] 00:09:42.039 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:09:42.039 slat (nsec): min=9107, max=40623, avg=10417.47, stdev=2390.68 00:09:42.039 clat (usec): min=136, max=371, avg=162.66, stdev=16.76 00:09:42.039 lat (usec): min=145, max=410, avg=173.07, stdev=17.61 00:09:42.039 clat percentiles (usec): 00:09:42.039 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 151], 00:09:42.039 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:09:42.039 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 186], 00:09:42.039 | 99.00th=[ 210], 99.50th=[ 243], 99.90th=[ 371], 99.95th=[ 371], 00:09:42.039 | 99.99th=[ 371] 00:09:42.039 bw ( KiB/s): min= 4096, max= 4096, per=17.23%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.039 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.039 lat (usec) : 250=95.33%, 500=0.37% 00:09:42.039 lat (msec) : 50=4.30% 00:09:42.039 cpu : usr=0.48%, sys=0.29%, ctx=535, majf=0, minf=1 00:09:42.039 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.039 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.039 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.039 job1: (groupid=0, jobs=1): err= 0: pid=2732844: Tue Nov 19 13:01:45 2024 00:09:42.039 read: IOPS=2545, BW=9.94MiB/s (10.4MB/s)(9.95MiB/1001msec) 00:09:42.039 slat (nsec): min=6151, max=27615, avg=7007.37, stdev=961.77 00:09:42.039 clat (usec): min=179, max=444, avg=223.00, stdev=19.91 00:09:42.039 lat (usec): min=185, max=451, avg=230.01, stdev=19.91 00:09:42.039 clat percentiles (usec): 00:09:42.039 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:09:42.039 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:09:42.039 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 260], 00:09:42.039 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 306], 99.95th=[ 383], 00:09:42.039 | 99.99th=[ 445] 00:09:42.039 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:42.039 slat (nsec): min=8771, max=35211, avg=9783.10, stdev=1094.63 00:09:42.039 clat (usec): min=114, max=280, avg=147.93, stdev=14.16 00:09:42.039 lat (usec): min=124, max=315, avg=157.71, stdev=14.29 00:09:42.039 clat percentiles (usec): 00:09:42.039 | 1.00th=[ 123], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 137], 00:09:42.039 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:09:42.039 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 174], 00:09:42.039 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 227], 99.95th=[ 245], 00:09:42.039 | 99.99th=[ 281] 00:09:42.039 bw ( KiB/s): min=12288, max=12288, per=51.70%, avg=12288.00, stdev= 0.00, samples=1 
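These summaries are easy to cross-check: with bs=4096 and iodepth=1, bandwidth is simply IOPS times 4 KiB, so job1's read line above (read: IOPS=2545, BW=9.94MiB/s) is self-consistent:

    # Sanity check on job1's read summary: 2545 IOPS x 4096 B per IO.
    awk 'BEGIN { printf "%.2f MiB/s\n", 2545 * 4096 / 2^20 }'   # -> 9.94 MiB/s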
00:09:42.039 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:42.039 lat (usec) : 250=94.69%, 500=5.31% 00:09:42.039 cpu : usr=1.90%, sys=4.90%, ctx=5108, majf=0, minf=1 00:09:42.039 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.039 issued rwts: total=2548,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.039 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.039 job2: (groupid=0, jobs=1): err= 0: pid=2732845: Tue Nov 19 13:01:45 2024 00:09:42.039 read: IOPS=2311, BW=9247KiB/s (9469kB/s)(9256KiB/1001msec) 00:09:42.039 slat (nsec): min=6515, max=27596, avg=7456.20, stdev=811.94 00:09:42.039 clat (usec): min=182, max=1081, avg=219.05, stdev=29.75 00:09:42.039 lat (usec): min=189, max=1088, avg=226.51, stdev=29.78 00:09:42.039 clat percentiles (usec): 00:09:42.039 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 202], 00:09:42.039 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:09:42.039 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 258], 95.00th=[ 277], 00:09:42.039 | 99.00th=[ 289], 99.50th=[ 289], 99.90th=[ 310], 99.95th=[ 433], 00:09:42.039 | 99.99th=[ 1074] 00:09:42.039 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:42.039 slat (nsec): min=9112, max=35366, avg=10114.59, stdev=1058.97 00:09:42.039 clat (usec): min=124, max=388, avg=172.04, stdev=27.55 00:09:42.039 lat (usec): min=134, max=423, avg=182.16, stdev=27.63 00:09:42.039 clat percentiles (usec): 00:09:42.039 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:09:42.039 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 176], 00:09:42.039 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 217], 00:09:42.039 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 285], 99.95th=[ 347], 00:09:42.039 | 99.99th=[ 388] 00:09:42.039 bw ( KiB/s): min=10424, max=10424, per=43.86%, avg=10424.00, stdev= 0.00, samples=1 00:09:42.039 iops : min= 2606, max= 2606, avg=2606.00, stdev= 0.00, samples=1 00:09:42.039 lat (usec) : 250=93.09%, 500=6.89% 00:09:42.039 lat (msec) : 2=0.02% 00:09:42.039 cpu : usr=2.40%, sys=4.40%, ctx=4874, majf=0, minf=1 00:09:42.039 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.040 issued rwts: total=2314,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.040 job3: (groupid=0, jobs=1): err= 0: pid=2732846: Tue Nov 19 13:01:45 2024 00:09:42.040 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:09:42.040 slat (nsec): min=9687, max=24470, avg=22961.50, stdev=2995.35 00:09:42.040 clat (usec): min=40740, max=41620, avg=40992.68, stdev=168.28 00:09:42.040 lat (usec): min=40763, max=41629, avg=41015.64, stdev=165.90 00:09:42.040 clat percentiles (usec): 00:09:42.040 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:42.040 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:42.040 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:42.040 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:42.040 | 99.99th=[41681] 00:09:42.040 
write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:09:42.040 slat (usec): min=9, max=166, avg=13.63, stdev= 8.09 00:09:42.040 clat (usec): min=91, max=339, avg=191.43, stdev=20.87 00:09:42.040 lat (usec): min=147, max=367, avg=205.05, stdev=21.40 00:09:42.040 clat percentiles (usec): 00:09:42.040 | 1.00th=[ 151], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 176], 00:09:42.040 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 196], 00:09:42.040 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 223], 00:09:42.040 | 99.00th=[ 233], 99.50th=[ 281], 99.90th=[ 338], 99.95th=[ 338], 00:09:42.040 | 99.99th=[ 338] 00:09:42.040 bw ( KiB/s): min= 4096, max= 4096, per=17.23%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.040 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.040 lat (usec) : 100=0.19%, 250=94.94%, 500=0.75% 00:09:42.040 lat (msec) : 50=4.12% 00:09:42.040 cpu : usr=0.30%, sys=0.60%, ctx=535, majf=0, minf=1 00:09:42.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.040 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.040 00:09:42.040 Run status group 0 (all jobs): 00:09:42.040 READ: bw=18.5MiB/s (19.4MB/s), 87.2KiB/s-9.94MiB/s (89.3kB/s-10.4MB/s), io=19.2MiB (20.1MB), run=1001-1034msec 00:09:42.040 WRITE: bw=23.2MiB/s (24.3MB/s), 1981KiB/s-9.99MiB/s (2028kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1034msec 00:09:42.040 00:09:42.040 Disk stats (read/write): 00:09:42.040 nvme0n1: ios=68/512, merge=0/0, ticks=762/78, in_queue=840, util=87.07% 00:09:42.040 nvme0n2: ios=2073/2360, merge=0/0, ticks=542/336, in_queue=878, util=93.30% 00:09:42.040 nvme0n3: ios=2048/2075, merge=0/0, ticks=440/358, in_queue=798, util=89.07% 00:09:42.040 nvme0n4: ios=76/512, merge=0/0, ticks=1715/98, in_queue=1813, util=98.53% 00:09:42.040 13:01:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:42.040 [global] 00:09:42.040 thread=1 00:09:42.040 invalidate=1 00:09:42.040 rw=write 00:09:42.040 time_based=1 00:09:42.040 runtime=1 00:09:42.040 ioengine=libaio 00:09:42.040 direct=1 00:09:42.040 bs=4096 00:09:42.040 iodepth=128 00:09:42.040 norandommap=0 00:09:42.040 numjobs=1 00:09:42.040 00:09:42.040 verify_dump=1 00:09:42.040 verify_backlog=512 00:09:42.040 verify_state_save=0 00:09:42.040 do_verify=1 00:09:42.040 verify=crc32c-intel 00:09:42.040 [job0] 00:09:42.040 filename=/dev/nvme0n1 00:09:42.040 [job1] 00:09:42.040 filename=/dev/nvme0n2 00:09:42.040 [job2] 00:09:42.040 filename=/dev/nvme0n3 00:09:42.040 [job3] 00:09:42.040 filename=/dev/nvme0n4 00:09:42.040 Could not set queue depth (nvme0n1) 00:09:42.040 Could not set queue depth (nvme0n2) 00:09:42.040 Could not set queue depth (nvme0n3) 00:09:42.040 Could not set queue depth (nvme0n4) 00:09:42.311 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:42.311 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:42.311 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:42.311 job3: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:42.311 fio-3.35 00:09:42.311 Starting 4 threads 00:09:43.688 00:09:43.688 job0: (groupid=0, jobs=1): err= 0: pid=2733227: Tue Nov 19 13:01:46 2024 00:09:43.688 read: IOPS=4703, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1005msec) 00:09:43.688 slat (nsec): min=1516, max=12994k, avg=94983.45, stdev=605465.17 00:09:43.688 clat (usec): min=1744, max=39140, avg=12111.73, stdev=4211.57 00:09:43.688 lat (usec): min=5053, max=39164, avg=12206.71, stdev=4264.48 00:09:43.688 clat percentiles (usec): 00:09:43.688 | 1.00th=[ 5407], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[10028], 00:09:43.688 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11076], 60.00th=[11469], 00:09:43.688 | 70.00th=[11863], 80.00th=[12649], 90.00th=[15533], 95.00th=[24249], 00:09:43.688 | 99.00th=[27132], 99.50th=[30540], 99.90th=[33162], 99.95th=[34341], 00:09:43.688 | 99.99th=[39060] 00:09:43.688 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:09:43.688 slat (usec): min=2, max=6162, avg=102.22, stdev=485.85 00:09:43.688 clat (usec): min=6405, max=49734, avg=13649.24, stdev=6409.73 00:09:43.688 lat (usec): min=6415, max=49742, avg=13751.46, stdev=6456.78 00:09:43.688 clat percentiles (usec): 00:09:43.688 | 1.00th=[ 7308], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10421], 00:09:43.688 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11207], 60.00th=[11469], 00:09:43.688 | 70.00th=[12387], 80.00th=[15270], 90.00th=[21627], 95.00th=[24773], 00:09:43.688 | 99.00th=[44303], 99.50th=[46400], 99.90th=[49546], 99.95th=[49546], 00:09:43.688 | 99.99th=[49546] 00:09:43.688 bw ( KiB/s): min=18032, max=22856, per=27.60%, avg=20444.00, stdev=3411.08, samples=2 00:09:43.688 iops : min= 4508, max= 5714, avg=5111.00, stdev=852.77, samples=2 00:09:43.688 lat (msec) : 2=0.01%, 10=15.40%, 20=72.93%, 50=11.67% 00:09:43.688 cpu : usr=3.29%, sys=6.77%, ctx=534, majf=0, minf=2 00:09:43.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:43.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.688 issued rwts: total=4727,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.688 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.688 job1: (groupid=0, jobs=1): err= 0: pid=2733228: Tue Nov 19 13:01:46 2024 00:09:43.688 read: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec) 00:09:43.688 slat (nsec): min=1367, max=10663k, avg=90647.87, stdev=666559.41 00:09:43.688 clat (usec): min=3694, max=23528, avg=11388.91, stdev=2699.81 00:09:43.688 lat (usec): min=3701, max=32475, avg=11479.56, stdev=2752.64 00:09:43.688 clat percentiles (usec): 00:09:43.688 | 1.00th=[ 4817], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9765], 00:09:43.688 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[11207], 00:09:43.688 | 70.00th=[11994], 80.00th=[13173], 90.00th=[15008], 95.00th=[17171], 00:09:43.688 | 99.00th=[20055], 99.50th=[21627], 99.90th=[22414], 99.95th=[22676], 00:09:43.688 | 99.99th=[23462] 00:09:43.688 write: IOPS=5904, BW=23.1MiB/s (24.2MB/s)(23.3MiB/1009msec); 0 zone resets 00:09:43.688 slat (usec): min=2, max=9084, avg=75.91, stdev=387.71 00:09:43.688 clat (usec): min=1440, max=32392, avg=10626.78, stdev=4200.76 00:09:43.688 lat (usec): min=1451, max=32407, avg=10702.69, stdev=4235.79 00:09:43.688 clat percentiles (usec): 00:09:43.688 | 1.00th=[ 3392], 5.00th=[ 4883], 10.00th=[ 6521], 
20.00th=[ 9110], 00:09:43.688 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:09:43.688 | 70.00th=[10814], 80.00th=[11469], 90.00th=[12125], 95.00th=[16188], 00:09:43.689 | 99.00th=[32113], 99.50th=[32113], 99.90th=[32375], 99.95th=[32375], 00:09:43.689 | 99.99th=[32375] 00:09:43.689 bw ( KiB/s): min=22072, max=24576, per=31.49%, avg=23324.00, stdev=1770.60, samples=2 00:09:43.689 iops : min= 5518, max= 6144, avg=5831.00, stdev=442.65, samples=2 00:09:43.689 lat (msec) : 2=0.07%, 4=1.06%, 10=29.14%, 20=67.31%, 50=2.42% 00:09:43.689 cpu : usr=4.37%, sys=7.14%, ctx=650, majf=0, minf=1 00:09:43.689 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:43.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.689 issued rwts: total=5632,5958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.689 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.689 job2: (groupid=0, jobs=1): err= 0: pid=2733229: Tue Nov 19 13:01:46 2024 00:09:43.689 read: IOPS=4504, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1008msec) 00:09:43.689 slat (nsec): min=1400, max=12301k, avg=110775.90, stdev=782503.51 00:09:43.689 clat (usec): min=3827, max=33763, avg=13525.35, stdev=3903.26 00:09:43.689 lat (usec): min=3836, max=33766, avg=13636.13, stdev=3952.24 00:09:43.689 clat percentiles (usec): 00:09:43.689 | 1.00th=[ 5538], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:09:43.689 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11994], 60.00th=[13173], 00:09:43.689 | 70.00th=[14222], 80.00th=[16450], 90.00th=[19006], 95.00th=[20841], 00:09:43.689 | 99.00th=[26608], 99.50th=[29754], 99.90th=[33817], 99.95th=[33817], 00:09:43.689 | 99.99th=[33817] 00:09:43.689 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:09:43.689 slat (usec): min=2, max=11159, avg=102.29, stdev=514.69 00:09:43.689 clat (usec): min=2623, max=59585, avg=14278.02, stdev=8463.80 00:09:43.689 lat (usec): min=2634, max=59591, avg=14380.31, stdev=8517.00 00:09:43.689 clat percentiles (usec): 00:09:43.689 | 1.00th=[ 3654], 5.00th=[ 5473], 10.00th=[ 7963], 20.00th=[10028], 00:09:43.689 | 30.00th=[11076], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:09:43.689 | 70.00th=[13435], 80.00th=[14222], 90.00th=[23725], 95.00th=[28181], 00:09:43.689 | 99.00th=[54789], 99.50th=[56886], 99.90th=[59507], 99.95th=[59507], 00:09:43.689 | 99.99th=[59507] 00:09:43.689 bw ( KiB/s): min=16400, max=20464, per=24.88%, avg=18432.00, stdev=2873.68, samples=2 00:09:43.689 iops : min= 4100, max= 5116, avg=4608.00, stdev=718.42, samples=2 00:09:43.689 lat (msec) : 4=0.89%, 10=11.01%, 20=74.45%, 50=12.72%, 100=0.94% 00:09:43.689 cpu : usr=2.88%, sys=5.96%, ctx=531, majf=0, minf=1 00:09:43.689 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:43.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.689 issued rwts: total=4541,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.689 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.689 job3: (groupid=0, jobs=1): err= 0: pid=2733230: Tue Nov 19 13:01:46 2024 00:09:43.689 read: IOPS=2612, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1013msec) 00:09:43.689 slat (nsec): min=1153, max=21397k, avg=158134.76, stdev=994828.12 00:09:43.689 clat (usec): min=6308, max=75342, avg=19403.14, stdev=12998.11 
00:09:43.689 lat (usec): min=6316, max=75351, avg=19561.27, stdev=13077.91 00:09:43.689 clat percentiles (usec): 00:09:43.689 | 1.00th=[10028], 5.00th=[10814], 10.00th=[11076], 20.00th=[13173], 00:09:43.689 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14746], 00:09:43.689 | 70.00th=[17433], 80.00th=[20579], 90.00th=[39584], 95.00th=[55313], 00:09:43.689 | 99.00th=[65274], 99.50th=[71828], 99.90th=[74974], 99.95th=[74974], 00:09:43.689 | 99.99th=[74974] 00:09:43.689 write: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec); 0 zone resets 00:09:43.689 slat (nsec): min=1935, max=40910k, avg=184031.60, stdev=1395755.10 00:09:43.689 clat (msec): min=5, max=137, avg=20.84, stdev=12.29 00:09:43.689 lat (msec): min=5, max=137, avg=21.02, stdev=12.50 00:09:43.689 clat percentiles (msec): 00:09:43.689 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 14], 00:09:43.689 | 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 17], 60.00th=[ 20], 00:09:43.689 | 70.00th=[ 22], 80.00th=[ 24], 90.00th=[ 35], 95.00th=[ 52], 00:09:43.689 | 99.00th=[ 63], 99.50th=[ 66], 99.90th=[ 116], 99.95th=[ 138], 00:09:43.689 | 99.99th=[ 138] 00:09:43.689 bw ( KiB/s): min= 8192, max=16048, per=16.36%, avg=12120.00, stdev=5555.03, samples=2 00:09:43.689 iops : min= 2048, max= 4012, avg=3030.00, stdev=1388.76, samples=2 00:09:43.689 lat (msec) : 10=1.15%, 20=67.17%, 50=25.80%, 100=5.75%, 250=0.12% 00:09:43.689 cpu : usr=1.68%, sys=3.16%, ctx=292, majf=0, minf=1 00:09:43.689 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:43.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:43.689 issued rwts: total=2646,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.689 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:43.689 00:09:43.689 Run status group 0 (all jobs): 00:09:43.689 READ: bw=67.7MiB/s (70.9MB/s), 10.2MiB/s-21.8MiB/s (10.7MB/s-22.9MB/s), io=68.5MiB (71.9MB), run=1005-1013msec 00:09:43.689 WRITE: bw=72.3MiB/s (75.8MB/s), 11.8MiB/s-23.1MiB/s (12.4MB/s-24.2MB/s), io=73.3MiB (76.8MB), run=1005-1013msec 00:09:43.689 00:09:43.689 Disk stats (read/write): 00:09:43.689 nvme0n1: ios=3926/4096, merge=0/0, ticks=20158/23533, in_queue=43691, util=86.97% 00:09:43.689 nvme0n2: ios=4658/5055, merge=0/0, ticks=51091/50462, in_queue=101553, util=98.58% 00:09:43.689 nvme0n3: ios=4153/4159, merge=0/0, ticks=53223/49815, in_queue=103038, util=98.65% 00:09:43.689 nvme0n4: ios=2255/2560, merge=0/0, ticks=16172/15267, in_queue=31439, util=98.12% 00:09:43.689 13:01:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:43.689 [global] 00:09:43.689 thread=1 00:09:43.689 invalidate=1 00:09:43.689 rw=randwrite 00:09:43.689 time_based=1 00:09:43.689 runtime=1 00:09:43.689 ioengine=libaio 00:09:43.689 direct=1 00:09:43.689 bs=4096 00:09:43.689 iodepth=128 00:09:43.689 norandommap=0 00:09:43.689 numjobs=1 00:09:43.689 00:09:43.689 verify_dump=1 00:09:43.689 verify_backlog=512 00:09:43.689 verify_state_save=0 00:09:43.689 do_verify=1 00:09:43.689 verify=crc32c-intel 00:09:43.689 [job0] 00:09:43.689 filename=/dev/nvme0n1 00:09:43.689 [job1] 00:09:43.689 filename=/dev/nvme0n2 00:09:43.689 [job2] 00:09:43.689 filename=/dev/nvme0n3 00:09:43.689 [job3] 00:09:43.689 filename=/dev/nvme0n4 00:09:43.689 Could not set queue depth (nvme0n1) 00:09:43.689 Could 
not set queue depth (nvme0n2) 00:09:43.689 Could not set queue depth (nvme0n3) 00:09:43.689 Could not set queue depth (nvme0n4) 00:09:43.689 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.689 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.689 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.689 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.689 fio-3.35 00:09:43.689 Starting 4 threads 00:09:45.215 00:09:45.215 job0: (groupid=0, jobs=1): err= 0: pid=2733596: Tue Nov 19 13:01:48 2024 00:09:45.215 read: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1009msec) 00:09:45.215 slat (nsec): min=1336, max=9585.8k, avg=90310.21, stdev=646698.65 00:09:45.215 clat (usec): min=3714, max=19734, avg=11032.09, stdev=2641.43 00:09:45.215 lat (usec): min=3720, max=24022, avg=11122.40, stdev=2686.34 00:09:45.215 clat percentiles (usec): 00:09:45.215 | 1.00th=[ 4490], 5.00th=[ 7963], 10.00th=[ 8586], 20.00th=[ 9634], 00:09:45.215 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10683], 00:09:45.215 | 70.00th=[10945], 80.00th=[12780], 90.00th=[15401], 95.00th=[16909], 00:09:45.215 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19530], 99.95th=[19792], 00:09:45.215 | 99.99th=[19792] 00:09:45.215 write: IOPS=6245, BW=24.4MiB/s (25.6MB/s)(24.6MiB/1009msec); 0 zone resets 00:09:45.215 slat (usec): min=2, max=9785, avg=64.15, stdev=318.21 00:09:45.215 clat (usec): min=721, max=20467, avg=9532.45, stdev=2329.10 00:09:45.215 lat (usec): min=750, max=20480, avg=9596.60, stdev=2353.16 00:09:45.215 clat percentiles (usec): 00:09:45.215 | 1.00th=[ 3228], 5.00th=[ 4621], 10.00th=[ 5932], 20.00th=[ 8094], 00:09:45.215 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:09:45.215 | 70.00th=[10552], 80.00th=[10683], 90.00th=[11076], 95.00th=[11600], 00:09:45.215 | 99.00th=[16712], 99.50th=[18482], 99.90th=[19530], 99.95th=[19792], 00:09:45.215 | 99.99th=[20579] 00:09:45.215 bw ( KiB/s): min=24328, max=25072, per=33.76%, avg=24700.00, stdev=526.09, samples=2 00:09:45.215 iops : min= 6082, max= 6268, avg=6175.00, stdev=131.52, samples=2 00:09:45.215 lat (usec) : 750=0.06% 00:09:45.215 lat (msec) : 2=0.11%, 4=1.58%, 10=33.90%, 20=64.34%, 50=0.01% 00:09:45.215 cpu : usr=4.56%, sys=6.55%, ctx=741, majf=0, minf=1 00:09:45.215 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:45.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.215 issued rwts: total=6144,6302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.215 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.215 job1: (groupid=0, jobs=1): err= 0: pid=2733597: Tue Nov 19 13:01:48 2024 00:09:45.215 read: IOPS=4971, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1003msec) 00:09:45.216 slat (nsec): min=1415, max=30757k, avg=119004.55, stdev=982837.51 00:09:45.216 clat (usec): min=1683, max=97156, avg=13944.00, stdev=10835.81 00:09:45.216 lat (usec): min=3533, max=97166, avg=14063.01, stdev=10938.76 00:09:45.216 clat percentiles (usec): 00:09:45.216 | 1.00th=[ 4178], 5.00th=[ 8291], 10.00th=[ 9503], 20.00th=[ 9765], 00:09:45.216 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10552], 60.00th=[11338], 00:09:45.216 | 70.00th=[11863], 
80.00th=[15533], 90.00th=[22414], 95.00th=[31589], 00:09:45.216 | 99.00th=[71828], 99.50th=[85459], 99.90th=[96994], 99.95th=[96994], 00:09:45.216 | 99.99th=[96994] 00:09:45.216 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:09:45.216 slat (usec): min=2, max=9781, avg=73.88, stdev=329.16 00:09:45.216 clat (usec): min=2349, max=97118, avg=11226.98, stdev=6847.09 00:09:45.216 lat (usec): min=2360, max=97121, avg=11300.86, stdev=6873.61 00:09:45.216 clat percentiles (usec): 00:09:45.216 | 1.00th=[ 2802], 5.00th=[ 4883], 10.00th=[ 6718], 20.00th=[ 9503], 00:09:45.216 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:09:45.216 | 70.00th=[10552], 80.00th=[10814], 90.00th=[13173], 95.00th=[22414], 00:09:45.216 | 99.00th=[49546], 99.50th=[57934], 99.90th=[67634], 99.95th=[67634], 00:09:45.216 | 99.99th=[96994] 00:09:45.216 bw ( KiB/s): min=16384, max=24576, per=27.99%, avg=20480.00, stdev=5792.62, samples=2 00:09:45.216 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:09:45.216 lat (msec) : 2=0.01%, 4=1.99%, 10=31.14%, 20=57.97%, 50=7.30% 00:09:45.216 lat (msec) : 100=1.59% 00:09:45.216 cpu : usr=2.89%, sys=6.19%, ctx=663, majf=0, minf=1 00:09:45.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:45.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.216 issued rwts: total=4986,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.216 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.216 job2: (groupid=0, jobs=1): err= 0: pid=2733598: Tue Nov 19 13:01:48 2024 00:09:45.216 read: IOPS=3116, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1006msec) 00:09:45.216 slat (nsec): min=1432, max=23545k, avg=152899.88, stdev=1120975.69 00:09:45.216 clat (usec): min=3009, max=75239, avg=18936.88, stdev=12788.05 00:09:45.216 lat (usec): min=7001, max=75265, avg=19089.78, stdev=12906.90 00:09:45.216 clat percentiles (usec): 00:09:45.216 | 1.00th=[ 8160], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11338], 00:09:45.216 | 30.00th=[11731], 40.00th=[12125], 50.00th=[13960], 60.00th=[14484], 00:09:45.216 | 70.00th=[15533], 80.00th=[27395], 90.00th=[43779], 95.00th=[47973], 00:09:45.216 | 99.00th=[58459], 99.50th=[58459], 99.90th=[63177], 99.95th=[68682], 00:09:45.216 | 99.99th=[74974] 00:09:45.216 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:09:45.216 slat (usec): min=2, max=24113, avg=139.45, stdev=1010.08 00:09:45.216 clat (usec): min=5949, max=63669, avg=18962.05, stdev=12520.92 00:09:45.216 lat (usec): min=5962, max=63700, avg=19101.49, stdev=12628.98 00:09:45.216 clat percentiles (usec): 00:09:45.216 | 1.00th=[ 8160], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:09:45.216 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12780], 60.00th=[13829], 00:09:45.216 | 70.00th=[15664], 80.00th=[25297], 90.00th=[40633], 95.00th=[45876], 00:09:45.216 | 99.00th=[62653], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:09:45.216 | 99.99th=[63701] 00:09:45.216 bw ( KiB/s): min=11496, max=16656, per=19.24%, avg=14076.00, stdev=3648.67, samples=2 00:09:45.216 iops : min= 2874, max= 4164, avg=3519.00, stdev=912.17, samples=2 00:09:45.216 lat (msec) : 4=0.01%, 10=4.17%, 20=71.32%, 50=20.29%, 100=4.21% 00:09:45.216 cpu : usr=2.59%, sys=4.98%, ctx=318, majf=0, minf=2 00:09:45.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:45.216 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.216 issued rwts: total=3135,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.216 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.216 job3: (groupid=0, jobs=1): err= 0: pid=2733599: Tue Nov 19 13:01:48 2024 00:09:45.216 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:09:45.216 slat (nsec): min=1449, max=16315k, avg=131693.74, stdev=1004965.88 00:09:45.216 clat (usec): min=6171, max=33082, avg=16656.54, stdev=4265.78 00:09:45.216 lat (usec): min=6181, max=41565, avg=16788.24, stdev=4354.37 00:09:45.216 clat percentiles (usec): 00:09:45.216 | 1.00th=[ 7701], 5.00th=[12387], 10.00th=[12649], 20.00th=[12911], 00:09:45.216 | 30.00th=[13435], 40.00th=[15139], 50.00th=[15926], 60.00th=[16909], 00:09:45.216 | 70.00th=[17957], 80.00th=[18744], 90.00th=[22676], 95.00th=[25297], 00:09:45.216 | 99.00th=[30540], 99.50th=[31065], 99.90th=[33162], 99.95th=[33162], 00:09:45.216 | 99.99th=[33162] 00:09:45.216 write: IOPS=3448, BW=13.5MiB/s (14.1MB/s)(13.6MiB/1011msec); 0 zone resets 00:09:45.216 slat (usec): min=2, max=16141, avg=157.67, stdev=957.00 00:09:45.216 clat (usec): min=1598, max=96542, avg=22065.42, stdev=16122.67 00:09:45.216 lat (usec): min=1610, max=96553, avg=22223.10, stdev=16228.83 00:09:45.216 clat percentiles (usec): 00:09:45.216 | 1.00th=[ 4752], 5.00th=[ 8848], 10.00th=[10814], 20.00th=[12256], 00:09:45.216 | 30.00th=[13566], 40.00th=[14484], 50.00th=[15926], 60.00th=[18482], 00:09:45.216 | 70.00th=[22414], 80.00th=[24511], 90.00th=[49021], 95.00th=[55837], 00:09:45.216 | 99.00th=[89654], 99.50th=[93848], 99.90th=[96994], 99.95th=[96994], 00:09:45.216 | 99.99th=[96994] 00:09:45.216 bw ( KiB/s): min=13280, max=13592, per=18.36%, avg=13436.00, stdev=220.62, samples=2 00:09:45.216 iops : min= 3320, max= 3398, avg=3359.00, stdev=55.15, samples=2 00:09:45.216 lat (msec) : 2=0.05%, 4=0.18%, 10=3.57%, 20=68.88%, 50=22.26% 00:09:45.216 lat (msec) : 100=5.06% 00:09:45.216 cpu : usr=3.47%, sys=4.16%, ctx=255, majf=0, minf=1 00:09:45.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:45.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.216 issued rwts: total=3072,3486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.216 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.216 00:09:45.216 Run status group 0 (all jobs): 00:09:45.216 READ: bw=67.0MiB/s (70.2MB/s), 11.9MiB/s-23.8MiB/s (12.4MB/s-24.9MB/s), io=67.7MiB (71.0MB), run=1003-1011msec 00:09:45.216 WRITE: bw=71.4MiB/s (74.9MB/s), 13.5MiB/s-24.4MiB/s (14.1MB/s-25.6MB/s), io=72.2MiB (75.7MB), run=1003-1011msec 00:09:45.216 00:09:45.216 Disk stats (read/write): 00:09:45.216 nvme0n1: ios=5171/5415, merge=0/0, ticks=54985/49872, in_queue=104857, util=94.09% 00:09:45.216 nvme0n2: ios=4133/4175, merge=0/0, ticks=58772/47372, in_queue=106144, util=96.45% 00:09:45.216 nvme0n3: ios=3013/3072, merge=0/0, ticks=28629/23755, in_queue=52384, util=90.85% 00:09:45.216 nvme0n4: ios=2606/2895, merge=0/0, ticks=42986/60120, in_queue=103106, util=96.23% 00:09:45.216 13:01:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:45.216 13:01:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2733833 00:09:45.216 13:01:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:45.216 13:01:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:45.216 [global] 00:09:45.216 thread=1 00:09:45.216 invalidate=1 00:09:45.216 rw=read 00:09:45.216 time_based=1 00:09:45.216 runtime=10 00:09:45.216 ioengine=libaio 00:09:45.216 direct=1 00:09:45.216 bs=4096 00:09:45.216 iodepth=1 00:09:45.216 norandommap=1 00:09:45.216 numjobs=1 00:09:45.216 00:09:45.216 [job0] 00:09:45.216 filename=/dev/nvme0n1 00:09:45.216 [job1] 00:09:45.216 filename=/dev/nvme0n2 00:09:45.216 [job2] 00:09:45.216 filename=/dev/nvme0n3 00:09:45.216 [job3] 00:09:45.216 filename=/dev/nvme0n4 00:09:45.216 Could not set queue depth (nvme0n1) 00:09:45.216 Could not set queue depth (nvme0n2) 00:09:45.216 Could not set queue depth (nvme0n3) 00:09:45.216 Could not set queue depth (nvme0n4) 00:09:45.475 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.475 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.475 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.475 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.475 fio-3.35 00:09:45.475 Starting 4 threads 00:09:48.009 13:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:48.268 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=44572672, buflen=4096 00:09:48.268 fio: pid=2733991, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:48.268 13:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:48.526 13:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:48.526 13:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:48.526 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=299008, buflen=4096 00:09:48.526 fio: pid=2733988, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:48.785 13:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:48.785 13:01:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:48.785 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=331776, buflen=4096 00:09:48.785 fio: pid=2733981, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:48.785 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=31731712, buflen=4096 00:09:48.785 fio: pid=2733983, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:48.785 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:48.785 13:01:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:49.045 00:09:49.045 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2733981: Tue Nov 19 13:01:52 2024 00:09:49.045 read: IOPS=25, BW=101KiB/s (104kB/s)(324KiB/3201msec) 00:09:49.045 slat (usec): min=7, max=14832, avg=253.64, stdev=1707.50 00:09:49.045 clat (usec): min=216, max=41936, avg=38988.17, stdev=8880.91 00:09:49.045 lat (usec): min=224, max=55925, avg=39244.66, stdev=9102.17 00:09:49.045 clat percentiles (usec): 00:09:49.045 | 1.00th=[ 217], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:49.045 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:49.045 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:49.045 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:49.045 | 99.99th=[41681] 00:09:49.045 bw ( KiB/s): min= 96, max= 112, per=0.46%, avg=102.00, stdev= 6.07, samples=6 00:09:49.045 iops : min= 24, max= 28, avg=25.50, stdev= 1.52, samples=6 00:09:49.045 lat (usec) : 250=2.44%, 500=2.44% 00:09:49.045 lat (msec) : 50=93.90% 00:09:49.045 cpu : usr=0.06%, sys=0.00%, ctx=84, majf=0, minf=2 00:09:49.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.045 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.045 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.045 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2733983: Tue Nov 19 13:01:52 2024 00:09:49.045 read: IOPS=2291, BW=9165KiB/s (9385kB/s)(30.3MiB/3381msec) 00:09:49.045 slat (usec): min=6, max=16750, avg=15.62, stdev=330.13 00:09:49.045 clat (usec): min=159, max=41118, avg=416.23, stdev=2956.89 00:09:49.045 lat (usec): min=166, max=41126, avg=431.86, stdev=2976.08 00:09:49.045 clat percentiles (usec): 00:09:49.045 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:09:49.045 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 202], 00:09:49.045 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 219], 95.00th=[ 225], 00:09:49.045 | 99.00th=[ 253], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:09:49.045 | 99.99th=[41157] 00:09:49.045 bw ( KiB/s): min= 96, max=19024, per=40.40%, avg=8978.00, stdev=8477.92, samples=6 00:09:49.045 iops : min= 24, max= 4756, avg=2244.50, stdev=2119.48, samples=6 00:09:49.045 lat (usec) : 250=98.95%, 500=0.48% 00:09:49.045 lat (msec) : 2=0.01%, 4=0.01%, 50=0.53% 00:09:49.045 cpu : usr=1.18%, sys=3.37%, ctx=7753, majf=0, minf=2 00:09:49.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.045 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.045 issued rwts: total=7748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.045 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2733988: Tue Nov 19 13:01:52 2024 00:09:49.045 read: IOPS=25, BW=98.8KiB/s (101kB/s)(292KiB/2956msec) 00:09:49.045 slat (usec): min=10, max=12876, 
avg=196.36, stdev=1494.18 00:09:49.045 clat (usec): min=433, max=45109, avg=40000.31, stdev=6705.24 00:09:49.045 lat (usec): min=459, max=54079, avg=40199.03, stdev=6901.75 00:09:49.045 clat percentiles (usec): 00:09:49.045 | 1.00th=[ 433], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:49.045 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:49.045 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:49.045 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:09:49.045 | 99.99th=[45351] 00:09:49.045 bw ( KiB/s): min= 96, max= 104, per=0.45%, avg=99.20, stdev= 4.38, samples=5 00:09:49.045 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:09:49.045 lat (usec) : 500=2.70% 00:09:49.045 lat (msec) : 50=95.95% 00:09:49.045 cpu : usr=0.14%, sys=0.00%, ctx=75, majf=0, minf=2 00:09:49.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.045 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.045 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.045 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2733991: Tue Nov 19 13:01:52 2024 00:09:49.045 read: IOPS=3990, BW=15.6MiB/s (16.3MB/s)(42.5MiB/2727msec) 00:09:49.045 slat (nsec): min=6398, max=41589, avg=8036.01, stdev=1411.90 00:09:49.045 clat (usec): min=165, max=2612, avg=238.89, stdev=43.95 00:09:49.045 lat (usec): min=175, max=2620, avg=246.93, stdev=43.79 00:09:49.045 clat percentiles (usec): 00:09:49.045 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 206], 00:09:49.045 | 30.00th=[ 219], 40.00th=[ 235], 50.00th=[ 245], 60.00th=[ 249], 00:09:49.045 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 265], 95.00th=[ 273], 00:09:49.045 | 99.00th=[ 412], 99.50th=[ 424], 99.90th=[ 445], 99.95th=[ 482], 00:09:49.045 | 99.99th=[ 611] 00:09:49.045 bw ( KiB/s): min=14192, max=18624, per=72.83%, avg=16184.00, stdev=1612.07, samples=5 00:09:49.045 iops : min= 3548, max= 4656, avg=4046.00, stdev=403.02, samples=5 00:09:49.045 lat (usec) : 250=62.51%, 500=37.46%, 750=0.01% 00:09:49.045 lat (msec) : 4=0.01% 00:09:49.045 cpu : usr=1.25%, sys=4.95%, ctx=10885, majf=0, minf=1 00:09:49.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.045 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.045 issued rwts: total=10883,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.045 00:09:49.045 Run status group 0 (all jobs): 00:09:49.045 READ: bw=21.7MiB/s (22.8MB/s), 98.8KiB/s-15.6MiB/s (101kB/s-16.3MB/s), io=73.4MiB (76.9MB), run=2727-3381msec 00:09:49.045 00:09:49.045 Disk stats (read/write): 00:09:49.045 nvme0n1: ios=79/0, merge=0/0, ticks=3077/0, in_queue=3077, util=95.22% 00:09:49.045 nvme0n2: ios=7712/0, merge=0/0, ticks=3115/0, in_queue=3115, util=94.68% 00:09:49.045 nvme0n3: ios=71/0, merge=0/0, ticks=2839/0, in_queue=2839, util=96.11% 00:09:49.045 nvme0n4: ios=10541/0, merge=0/0, ticks=2567/0, in_queue=2567, util=100.00% 00:09:49.045 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:09:49.045 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:49.304 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.304 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:49.563 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.563 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:49.822 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.822 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:49.822 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:49.822 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2733833 00:09:49.822 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:49.822 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:50.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.081 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:50.081 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:50.081 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:50.081 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:50.081 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:50.081 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:50.081 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:50.081 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:50.081 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:50.081 nvmf hotplug test: fio failed as expected 00:09:50.081 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:50.340 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:50.340 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:50.340 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:50.340 13:01:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:50.340 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:50.340 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:50.340 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:50.340 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:50.340 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:50.340 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:50.340 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:50.340 rmmod nvme_tcp 00:09:50.341 rmmod nvme_fabrics 00:09:50.341 rmmod nvme_keyring 00:09:50.341 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:50.341 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:50.341 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:50.341 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2731118 ']' 00:09:50.341 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2731118 00:09:50.341 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2731118 ']' 00:09:50.341 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2731118 00:09:50.341 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:50.341 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.341 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2731118 00:09:50.341 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.341 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.341 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2731118' 00:09:50.341 killing process with pid 2731118 00:09:50.341 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2731118 00:09:50.341 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2731118 00:09:50.600 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:50.600 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:50.600 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:50.600 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:50.600 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:50.600 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:50.600 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:50.600 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:50.600 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:50.600 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.600 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.600 13:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.137 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:53.137 00:09:53.137 real 0m26.970s 00:09:53.137 user 1m46.373s 00:09:53.137 sys 0m8.736s 00:09:53.137 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.137 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.137 ************************************ 00:09:53.137 END TEST nvmf_fio_target 00:09:53.137 ************************************ 00:09:53.137 13:01:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:53.137 13:01:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:53.137 13:01:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.137 13:01:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.137 ************************************ 00:09:53.137 START TEST nvmf_bdevio 00:09:53.137 ************************************ 00:09:53.137 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:53.137 * Looking for test storage... 
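The xtrace just below walks scripts/common.sh's cmp_versions helper deciding whether the detected lcov (1.15) sorts before 2. A minimal standalone sketch of the same split-and-compare idea, assuming plain bash, numeric version fields, and a hypothetical name version_lt:

#!/usr/bin/env bash
# Split each version on . - : and compare field by field, padding the
# shorter one with zeros, mirroring the traced cmp_versions flow.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing field counts as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo '1.15 < 2'   # prints: 1.15 < 2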
00:09:53.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:53.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.137 --rc genhtml_branch_coverage=1 00:09:53.137 --rc genhtml_function_coverage=1 00:09:53.137 --rc genhtml_legend=1 00:09:53.137 --rc geninfo_all_blocks=1 00:09:53.137 --rc geninfo_unexecuted_blocks=1 00:09:53.137 00:09:53.137 ' 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:53.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.137 --rc genhtml_branch_coverage=1 00:09:53.137 --rc genhtml_function_coverage=1 00:09:53.137 --rc genhtml_legend=1 00:09:53.137 --rc geninfo_all_blocks=1 00:09:53.137 --rc geninfo_unexecuted_blocks=1 00:09:53.137 00:09:53.137 ' 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:53.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.137 --rc genhtml_branch_coverage=1 00:09:53.137 --rc genhtml_function_coverage=1 00:09:53.137 --rc genhtml_legend=1 00:09:53.137 --rc geninfo_all_blocks=1 00:09:53.137 --rc geninfo_unexecuted_blocks=1 00:09:53.137 00:09:53.137 ' 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:53.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.137 --rc genhtml_branch_coverage=1 00:09:53.137 --rc genhtml_function_coverage=1 00:09:53.137 --rc genhtml_legend=1 00:09:53.137 --rc geninfo_all_blocks=1 00:09:53.137 --rc geninfo_unexecuted_blocks=1 00:09:53.137 00:09:53.137 ' 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.137 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:53.138 13:01:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:59.710 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.710 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:59.711 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:59.711 13:02:01 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:59.711 Found net devices under 0000:86:00.0: cvl_0_0 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:59.711 Found net devices under 0000:86:00.1: cvl_0_1 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.711 
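The nvmf_tcp_init trace that continues below pins the two ports of the NIC to opposite ends of the fabric: cvl_0_0 becomes the target side at 10.0.0.2 inside a private network namespace, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed into a plain sketch (root assumed, device names and addresses taken from this run):

# Isolate the target port in its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port through the host firewall, tagged so
# teardown can find and strip the rule again later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# One ping in each direction confirms the link before any NVMe traffic.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1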
13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.711 13:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:59.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:09:59.711 00:09:59.711 --- 10.0.0.2 ping statistics --- 00:09:59.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.711 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:09:59.711 00:09:59.711 --- 10.0.0.1 ping statistics --- 00:09:59.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.711 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2738465 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2738465 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2738465 ']' 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.711 [2024-11-19 13:02:02.232664] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:09:59.711 [2024-11-19 13:02:02.232713] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.711 [2024-11-19 13:02:02.310114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.711 [2024-11-19 13:02:02.350398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.711 [2024-11-19 13:02:02.350439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.711 [2024-11-19 13:02:02.350446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.711 [2024-11-19 13:02:02.350452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.711 [2024-11-19 13:02:02.350457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.711 [2024-11-19 13:02:02.352073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:59.711 [2024-11-19 13:02:02.352181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:59.711 [2024-11-19 13:02:02.352210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.711 [2024-11-19 13:02:02.352211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:59.711 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.712 [2024-11-19 13:02:02.500474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.712 Malloc0 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.712 13:02:02 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.712 [2024-11-19 13:02:02.563327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:59.712 { 00:09:59.712 "params": { 00:09:59.712 "name": "Nvme$subsystem", 00:09:59.712 "trtype": "$TEST_TRANSPORT", 00:09:59.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:59.712 "adrfam": "ipv4", 00:09:59.712 "trsvcid": "$NVMF_PORT", 00:09:59.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:59.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:59.712 "hdgst": ${hdgst:-false}, 00:09:59.712 "ddgst": ${ddgst:-false} 00:09:59.712 }, 00:09:59.712 "method": "bdev_nvme_attach_controller" 00:09:59.712 } 00:09:59.712 EOF 00:09:59.712 )") 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:59.712 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:59.712 "params": { 00:09:59.712 "name": "Nvme1", 00:09:59.712 "trtype": "tcp", 00:09:59.712 "traddr": "10.0.0.2", 00:09:59.712 "adrfam": "ipv4", 00:09:59.712 "trsvcid": "4420", 00:09:59.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:59.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:59.712 "hdgst": false, 00:09:59.712 "ddgst": false 00:09:59.712 }, 00:09:59.712 "method": "bdev_nvme_attach_controller" 00:09:59.712 }' 00:09:59.712 [2024-11-19 13:02:02.613284] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
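Just above, gen_nvmf_target_json renders a bdev_nvme_attach_controller stanza that bdevio consumes straight off a file descriptor (--json /dev/fd/62), so the config never touches disk. A stripped-down sketch of the same pattern via process substitution; note the outer "subsystems"/"config" envelope is an assumption about the final config shape, since only the inner object appears verbatim in the log:

# Hypothetical wrapper; params copied from the rendered JSON above.
gen_json() {
cat <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF
}

# bash opens gen_json's output on some /dev/fd/NN (62 in the captured run).
"$SPDK_ROOT"/test/bdev/bdevio/bdevio --json <(gen_json)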
00:09:59.712 [2024-11-19 13:02:02.613326] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2738490 ] 00:09:59.712 [2024-11-19 13:02:02.688293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:59.712 [2024-11-19 13:02:02.732973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.712 [2024-11-19 13:02:02.733039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.712 [2024-11-19 13:02:02.733039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.712 I/O targets: 00:09:59.712 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:59.712 00:09:59.712 00:09:59.712 CUnit - A unit testing framework for C - Version 2.1-3 00:09:59.712 http://cunit.sourceforge.net/ 00:09:59.712 00:09:59.712 00:09:59.712 Suite: bdevio tests on: Nvme1n1 00:09:59.712 Test: blockdev write read block ...passed 00:09:59.712 Test: blockdev write zeroes read block ...passed 00:09:59.712 Test: blockdev write zeroes read no split ...passed 00:09:59.712 Test: blockdev write zeroes read split ...passed 00:09:59.971 Test: blockdev write zeroes read split partial ...passed 00:09:59.971 Test: blockdev reset ...[2024-11-19 13:02:03.090552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:59.971 [2024-11-19 13:02:03.090622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa9340 (9): Bad file descriptor 00:09:59.971 [2024-11-19 13:02:03.121078] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:59.971 passed 00:09:59.971 Test: blockdev write read 8 blocks ...passed 00:09:59.971 Test: blockdev write read size > 128k ...passed 00:09:59.971 Test: blockdev write read invalid size ...passed 00:09:59.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:59.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:59.971 Test: blockdev write read max offset ...passed 00:09:59.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:59.971 Test: blockdev writev readv 8 blocks ...passed 00:09:59.971 Test: blockdev writev readv 30 x 1block ...passed 00:10:00.231 Test: blockdev writev readv block ...passed 00:10:00.231 Test: blockdev writev readv size > 128k ...passed 00:10:00.231 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:00.231 Test: blockdev comparev and writev ...[2024-11-19 13:02:03.371789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.231 [2024-11-19 13:02:03.371817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:00.231 [2024-11-19 13:02:03.371832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.231 [2024-11-19 13:02:03.371840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:00.231 [2024-11-19 13:02:03.372096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.231 [2024-11-19 13:02:03.372107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:00.231 [2024-11-19 13:02:03.372119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.231 [2024-11-19 13:02:03.372129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:00.231 [2024-11-19 13:02:03.372373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.231 [2024-11-19 13:02:03.372382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:00.231 [2024-11-19 13:02:03.372394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.231 [2024-11-19 13:02:03.372401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:00.231 [2024-11-19 13:02:03.372625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.231 [2024-11-19 13:02:03.372635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:00.231 [2024-11-19 13:02:03.372647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.231 [2024-11-19 13:02:03.372653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:00.231 passed 00:10:00.231 Test: blockdev nvme passthru rw ...passed 00:10:00.231 Test: blockdev nvme passthru vendor specific ...[2024-11-19 13:02:03.455433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:00.231 [2024-11-19 13:02:03.455448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:00.231 [2024-11-19 13:02:03.455554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:00.231 [2024-11-19 13:02:03.455563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:00.231 [2024-11-19 13:02:03.455662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:00.231 [2024-11-19 13:02:03.455672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:00.231 [2024-11-19 13:02:03.455776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:00.231 [2024-11-19 13:02:03.455785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:00.231 passed 00:10:00.231 Test: blockdev nvme admin passthru ...passed 00:10:00.231 Test: blockdev copy ...passed 00:10:00.231 00:10:00.231 Run Summary: Type Total Ran Passed Failed Inactive 00:10:00.231 suites 1 1 n/a 0 0 00:10:00.231 tests 23 23 23 0 0 00:10:00.231 asserts 152 152 152 0 n/a 00:10:00.231 00:10:00.231 Elapsed time = 1.089 seconds 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:00.490 rmmod nvme_tcp 00:10:00.490 rmmod nvme_fabrics 00:10:00.490 rmmod nvme_keyring 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
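The comparev/writev and passthru cases above exercise fused COMPARE+WRITE: when the COMPARE half miscompares, the controller completes it with Media status 02/85 (Compare Failure) and aborts its fused WRITE partner with Generic status 00/09 (Command Aborted due to Failed Fused Command), while the vendor-specific passthru probes draw 00/01 (Invalid Command Opcode). A minimal sketch of a decoder for the (SCT/SC) pairs printed above — a hypothetical helper, not part of the test scripts, mapping only the three pairs seen in this log:

decode_nvme_status() {    # usage: decode_nvme_status 02/85
    # SCT/SC values as printed by spdk_nvme_print_completion,
    # per the NVMe base spec status code tables
    case "$1" in
        00/01) echo 'GENERIC: Invalid Command Opcode' ;;
        00/09) echo 'GENERIC: Command Aborted due to Failed Fused Command' ;;
        02/85) echo 'MEDIA: Compare Failure' ;;
        *)     echo "unmapped SCT/SC pair: $1" ;;
    esac
}
decode_nvme_status 02/85    # -> MEDIA: Compare Failure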
00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2738465 ']' 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2738465 00:10:00.490 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2738465 ']' 00:10:00.491 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2738465 00:10:00.491 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:00.491 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.491 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2738465 00:10:00.491 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:00.491 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:00.491 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2738465' 00:10:00.491 killing process with pid 2738465 00:10:00.491 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2738465 00:10:00.491 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2738465 00:10:00.750 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:00.750 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:00.750 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:00.750 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:00.750 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:00.750 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:00.750 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:00.750 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:00.750 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:00.750 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.750 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.750 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:03.289 00:10:03.289 real 0m10.061s 00:10:03.289 user 0m10.344s 00:10:03.289 sys 0m4.979s 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.289 ************************************ 00:10:03.289 END TEST nvmf_bdevio 00:10:03.289 ************************************ 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:03.289 00:10:03.289 real 4m38.614s 00:10:03.289 user 10m30.102s 00:10:03.289 sys 1m38.193s 
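The nvmftestfini teardown traced above stops the nvmf target app and unwinds the host state the test set up. Condensed into plain commands, as a sketch under assumptions: the PID, module and interface names are taken from this log, and `ip netns delete` stands in for the `_remove_spdk_ns` helper, whose body is not shown here:

kill 2738465                                          # stop the nvmf target (reactor_3); the suite's killprocess also waits on it
modprobe -v -r nvme-tcp                               # also drops the nvme_fabrics / nvme_keyring dependencies, as logged
iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip the test's SPDK_NVMF-tagged ACCEPT rules
ip netns delete cvl_0_0_ns_spdk                       # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1                              # clear the initiator-side test address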
00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.289 ************************************ 00:10:03.289 END TEST nvmf_target_core 00:10:03.289 ************************************ 00:10:03.289 13:02:06 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:03.289 13:02:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:03.289 13:02:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.289 13:02:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:03.289 ************************************ 00:10:03.289 START TEST nvmf_target_extra 00:10:03.289 ************************************ 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:03.289 * Looking for test storage... 00:10:03.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.289 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:03.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.290 --rc genhtml_branch_coverage=1 00:10:03.290 --rc genhtml_function_coverage=1 00:10:03.290 --rc genhtml_legend=1 00:10:03.290 --rc geninfo_all_blocks=1 00:10:03.290 --rc geninfo_unexecuted_blocks=1 00:10:03.290 00:10:03.290 ' 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:03.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.290 --rc genhtml_branch_coverage=1 00:10:03.290 --rc genhtml_function_coverage=1 00:10:03.290 --rc genhtml_legend=1 00:10:03.290 --rc geninfo_all_blocks=1 00:10:03.290 --rc geninfo_unexecuted_blocks=1 00:10:03.290 00:10:03.290 ' 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:03.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.290 --rc genhtml_branch_coverage=1 00:10:03.290 --rc genhtml_function_coverage=1 00:10:03.290 --rc genhtml_legend=1 00:10:03.290 --rc geninfo_all_blocks=1 00:10:03.290 --rc geninfo_unexecuted_blocks=1 00:10:03.290 00:10:03.290 ' 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:03.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.290 --rc genhtml_branch_coverage=1 00:10:03.290 --rc genhtml_function_coverage=1 00:10:03.290 --rc genhtml_legend=1 00:10:03.290 --rc geninfo_all_blocks=1 00:10:03.290 --rc geninfo_unexecuted_blocks=1 00:10:03.290 00:10:03.290 ' 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
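The cmp_versions walk traced above splits the two version strings, compares them component-wise, and concludes here that lcov 1.15 < 2, so the legacy --rc lcov_* option names get exported. An equivalent minimal sketch of that comparison — same logic, not the exact scripts/common.sh implementation (which also splits on '-' and ':'):

lt() {    # exit 0 iff version $1 < version $2
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        # missing components compare as 0 (so 1.15 vs 2 -> 1 < 2)
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal is not less-than
}
lt 1.15 2 && echo 'lcov < 2: use legacy --rc lcov_branch_coverage / lcov_function_coverage names'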
00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:03.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:03.290 ************************************ 00:10:03.290 START TEST nvmf_example 00:10:03.290 ************************************ 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:03.290 * Looking for test storage... 
00:10:03.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.290 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:03.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.291 --rc genhtml_branch_coverage=1 00:10:03.291 --rc genhtml_function_coverage=1 00:10:03.291 --rc genhtml_legend=1 00:10:03.291 --rc geninfo_all_blocks=1 00:10:03.291 --rc geninfo_unexecuted_blocks=1 00:10:03.291 00:10:03.291 ' 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:03.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.291 --rc genhtml_branch_coverage=1 00:10:03.291 --rc genhtml_function_coverage=1 00:10:03.291 --rc genhtml_legend=1 00:10:03.291 --rc geninfo_all_blocks=1 00:10:03.291 --rc geninfo_unexecuted_blocks=1 00:10:03.291 00:10:03.291 ' 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:03.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.291 --rc genhtml_branch_coverage=1 00:10:03.291 --rc genhtml_function_coverage=1 00:10:03.291 --rc genhtml_legend=1 00:10:03.291 --rc geninfo_all_blocks=1 00:10:03.291 --rc geninfo_unexecuted_blocks=1 00:10:03.291 00:10:03.291 ' 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:03.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.291 --rc genhtml_branch_coverage=1 00:10:03.291 --rc genhtml_function_coverage=1 00:10:03.291 --rc genhtml_legend=1 00:10:03.291 --rc geninfo_all_blocks=1 00:10:03.291 --rc geninfo_unexecuted_blocks=1 00:10:03.291 00:10:03.291 ' 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:03.291 13:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:03.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:03.291 13:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:03.291 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:03.292 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.292 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.292 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.292 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:03.292 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:03.292 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:03.292 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:09.865 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.865 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:09.865 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:09.866 13:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:09.866 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:09.866 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:09.866 Found net devices under 0000:86:00.0: cvl_0_0 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:09.866 Found net devices under 0000:86:00.1: cvl_0_1 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.866 13:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:09.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:09.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:10:09.866 00:10:09.866 --- 10.0.0.2 ping statistics --- 00:10:09.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.866 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:09.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:09.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:10:09.866 00:10:09.866 --- 10.0.0.1 ping statistics --- 00:10:09.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.866 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:09.866 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2742339 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2742339 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2742339 ']' 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.867 13:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.867 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:10.435 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.436 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:10.436 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:20.416 Initializing NVMe Controllers 00:10:20.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:20.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:20.416 Initialization complete. Launching workers. 00:10:20.416 ======================================================== 00:10:20.416 Latency(us) 00:10:20.416 Device Information : IOPS MiB/s Average min max 00:10:20.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18124.20 70.80 3532.50 697.79 15838.24 00:10:20.416 ======================================================== 00:10:20.416 Total : 18124.20 70.80 3532.50 697.79 15838.24 00:10:20.416 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:20.676 rmmod nvme_tcp 00:10:20.676 rmmod nvme_fabrics 00:10:20.676 rmmod nvme_keyring 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2742339 ']' 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2742339 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2742339 ']' 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2742339 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2742339 00:10:20.676 13:02:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2742339' 00:10:20.676 killing process with pid 2742339 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2742339 00:10:20.676 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2742339 00:10:20.936 nvmf threads initialize successfully 00:10:20.936 bdev subsystem init successfully 00:10:20.936 created a nvmf target service 00:10:20.936 create targets's poll groups done 00:10:20.936 all subsystems of target started 00:10:20.936 nvmf target is running 00:10:20.936 all subsystems of target stopped 00:10:20.936 destroy targets's poll groups done 00:10:20.936 destroyed the nvmf target service 00:10:20.936 bdev subsystem finish successfully 00:10:20.936 nvmf threads destroy successfully 00:10:20.936 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:20.936 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:20.936 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:20.936 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:20.936 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:20.936 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:20.936 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:20.936 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:20.936 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:20.936 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.936 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.936 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.842 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:22.842 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:22.842 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:22.842 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:23.100 00:10:23.100 real 0m19.833s 00:10:23.100 user 0m45.953s 00:10:23.100 sys 0m6.176s 00:10:23.100 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:23.101 ************************************ 00:10:23.101 END TEST nvmf_example 00:10:23.101 ************************************ 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:23.101 ************************************ 00:10:23.101 START TEST nvmf_filesystem 00:10:23.101 ************************************ 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:23.101 * Looking for test storage... 00:10:23.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:23.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.101 --rc genhtml_branch_coverage=1 00:10:23.101 --rc genhtml_function_coverage=1 00:10:23.101 --rc genhtml_legend=1 00:10:23.101 --rc geninfo_all_blocks=1 00:10:23.101 --rc geninfo_unexecuted_blocks=1 00:10:23.101 00:10:23.101 ' 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:23.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.101 --rc genhtml_branch_coverage=1 00:10:23.101 --rc genhtml_function_coverage=1 00:10:23.101 --rc genhtml_legend=1 00:10:23.101 --rc geninfo_all_blocks=1 00:10:23.101 --rc geninfo_unexecuted_blocks=1 00:10:23.101 00:10:23.101 ' 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:23.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.101 --rc genhtml_branch_coverage=1 00:10:23.101 --rc genhtml_function_coverage=1 00:10:23.101 --rc genhtml_legend=1 00:10:23.101 --rc geninfo_all_blocks=1 00:10:23.101 --rc geninfo_unexecuted_blocks=1 00:10:23.101 00:10:23.101 ' 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:23.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.101 --rc genhtml_branch_coverage=1 00:10:23.101 --rc genhtml_function_coverage=1 00:10:23.101 --rc genhtml_legend=1 00:10:23.101 --rc geninfo_all_blocks=1 00:10:23.101 --rc geninfo_unexecuted_blocks=1 00:10:23.101 00:10:23.101 ' 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:23.101 13:02:26 
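
The gate being traced here (scripts/common.sh's lt/cmp_versions, used to decide whether the installed lcov 1.15 predates 2.x and therefore which coverage flags to export) is an ordinary field-wise numeric version compare. A minimal independent re-derivation of just the "<" case, handling numeric fields only (the real helper also normalizes odd fields through its decimal function):

lt() {
    local IFS=.-:                               # split version strings on '.', '-', ':'
    local -a ver1=($1) ver2=($2)
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}   # a missing field compares as 0
        (( 10#$a < 10#$b )) && return 0         # first lower field decides: true
        (( 10#$a > 10#$b )) && return 1         # first higher field decides: false
    done
    return 1                                    # all fields equal: not "less than"
}

lt 1.15 2 && echo "pre-2.x lcov"                # true, as in the trace: 1 < 2 at the first field
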
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:23.101 
13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:23.101 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:23.365 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:23.366 #define SPDK_CONFIG_H 00:10:23.366 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:23.366 #define SPDK_CONFIG_APPS 1 00:10:23.366 #define SPDK_CONFIG_ARCH native 00:10:23.366 #undef SPDK_CONFIG_ASAN 00:10:23.366 #undef SPDK_CONFIG_AVAHI 00:10:23.366 #undef SPDK_CONFIG_CET 00:10:23.366 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:23.366 #define SPDK_CONFIG_COVERAGE 1 00:10:23.366 #define SPDK_CONFIG_CROSS_PREFIX 00:10:23.366 #undef SPDK_CONFIG_CRYPTO 00:10:23.366 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:23.366 #undef SPDK_CONFIG_CUSTOMOCF 00:10:23.366 #undef SPDK_CONFIG_DAOS 00:10:23.366 #define SPDK_CONFIG_DAOS_DIR 00:10:23.366 #define SPDK_CONFIG_DEBUG 1 00:10:23.366 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:23.366 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:23.366 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:23.366 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:23.366 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:23.366 #undef SPDK_CONFIG_DPDK_UADK 00:10:23.366 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:23.366 #define SPDK_CONFIG_EXAMPLES 1 00:10:23.366 #undef SPDK_CONFIG_FC 00:10:23.366 #define SPDK_CONFIG_FC_PATH 00:10:23.366 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:23.366 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:23.366 #define SPDK_CONFIG_FSDEV 1 00:10:23.366 #undef SPDK_CONFIG_FUSE 00:10:23.366 #undef SPDK_CONFIG_FUZZER 00:10:23.366 #define SPDK_CONFIG_FUZZER_LIB 00:10:23.366 #undef SPDK_CONFIG_GOLANG 00:10:23.366 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:23.366 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:23.366 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:23.366 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:23.366 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:23.366 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:23.366 #undef SPDK_CONFIG_HAVE_LZ4 00:10:23.366 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:23.366 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:23.366 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:23.366 #define SPDK_CONFIG_IDXD 1 00:10:23.366 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:23.366 #undef SPDK_CONFIG_IPSEC_MB 00:10:23.366 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:23.366 #define SPDK_CONFIG_ISAL 1 00:10:23.366 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:23.366 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:23.366 #define SPDK_CONFIG_LIBDIR 00:10:23.366 #undef SPDK_CONFIG_LTO 00:10:23.366 #define SPDK_CONFIG_MAX_LCORES 128 00:10:23.366 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:23.366 #define SPDK_CONFIG_NVME_CUSE 1 00:10:23.366 #undef SPDK_CONFIG_OCF 00:10:23.366 #define SPDK_CONFIG_OCF_PATH 00:10:23.366 #define SPDK_CONFIG_OPENSSL_PATH 00:10:23.366 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:23.366 #define SPDK_CONFIG_PGO_DIR 00:10:23.366 #undef SPDK_CONFIG_PGO_USE 00:10:23.366 #define SPDK_CONFIG_PREFIX /usr/local 00:10:23.366 #undef SPDK_CONFIG_RAID5F 00:10:23.366 #undef SPDK_CONFIG_RBD 00:10:23.366 #define SPDK_CONFIG_RDMA 1 00:10:23.366 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:23.366 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:23.366 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:23.366 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:23.366 #define SPDK_CONFIG_SHARED 1 00:10:23.366 #undef SPDK_CONFIG_SMA 00:10:23.366 #define SPDK_CONFIG_TESTS 1 00:10:23.366 #undef SPDK_CONFIG_TSAN 
00:10:23.366 #define SPDK_CONFIG_UBLK 1 00:10:23.366 #define SPDK_CONFIG_UBSAN 1 00:10:23.366 #undef SPDK_CONFIG_UNIT_TESTS 00:10:23.366 #undef SPDK_CONFIG_URING 00:10:23.366 #define SPDK_CONFIG_URING_PATH 00:10:23.366 #undef SPDK_CONFIG_URING_ZNS 00:10:23.366 #undef SPDK_CONFIG_USDT 00:10:23.366 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:23.366 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:23.366 #define SPDK_CONFIG_VFIO_USER 1 00:10:23.366 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:23.366 #define SPDK_CONFIG_VHOST 1 00:10:23.366 #define SPDK_CONFIG_VIRTIO 1 00:10:23.366 #undef SPDK_CONFIG_VTUNE 00:10:23.366 #define SPDK_CONFIG_VTUNE_DIR 00:10:23.366 #define SPDK_CONFIG_WERROR 1 00:10:23.366 #define SPDK_CONFIG_WPDK_DIR 00:10:23.366 #undef SPDK_CONFIG_XNVME 00:10:23.366 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:23.366 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:23.367 13:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:23.367 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:23.367 13:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:23.368 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
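
Condensed, the sanitizer and runtime environment the harness has just exported boils down to the following (values verbatim from the trace; the suppression file is the one written by the echo leak:libfuse3.so step):

export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file   # known fuse3 leak, ignored by LeakSanitizer
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
export PYTHONDONTWRITEBYTECODE=1                           # keep .pyc litter out of the workspace
export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock                 # RPC socket the tests talk to
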
00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2744742 ]] 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2744742 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
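set_test_storage, whose trace begins here, decides where the per-test scratch space lives: it walks df output, records type/size/available for every mount, and falls back from the test's own directory to a mktemp path when the preferred location is too small (the trace shows requested_size=2214592512, i.e. the 2 GiB argument plus 64 MiB of slack). A minimal paraphrase of that decision follows; it assumes df -B1 byte counts and a $testdir variable from the surrounding harness, and is a sketch rather than the harness code:

    requested_size=$(( 2147483648 + 64 * 1024 * 1024 ))   # 2 GiB + 64 MiB slack = 2214592512
    declare -A fss sizes avails
    # -B1 asks df for byte counts, matching the byte values seen in the trace.
    while read -r source fs size use avail _ mount; do
      fss["$mount"]=$fs; sizes["$mount"]=$size; avails["$mount"]=$avail
    done < <(df -T -B1 | grep -v Filesystem)
    mount=$(df "$testdir" | awk '$1 !~ /Filesystem/{print $6}')
    if (( ${avails[$mount]:-0} >= requested_size )); then
      target_dir=$testdir                                  # enough room next to the test
    else
      target_dir=$(mktemp -udt spdk.XXXXXX)                # fall back to a /tmp scratch dir
    fi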
00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.riU2Rl 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.riU2Rl/tests/target /tmp/spdk.riU2Rl 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:23.369 13:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=188997341184 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6966620160 00:10:23.369 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981595648 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=385024 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:23.370 13:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:23.370 * Looking for test storage... 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=188997341184 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9181212672 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:23.370 13:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:23.370 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:23.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.371 --rc genhtml_branch_coverage=1 00:10:23.371 --rc genhtml_function_coverage=1 00:10:23.371 --rc genhtml_legend=1 00:10:23.371 --rc geninfo_all_blocks=1 00:10:23.371 --rc geninfo_unexecuted_blocks=1 00:10:23.371 00:10:23.371 ' 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:23.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.371 --rc genhtml_branch_coverage=1 00:10:23.371 --rc genhtml_function_coverage=1 00:10:23.371 --rc genhtml_legend=1 00:10:23.371 --rc geninfo_all_blocks=1 00:10:23.371 --rc geninfo_unexecuted_blocks=1 00:10:23.371 00:10:23.371 ' 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:23.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.371 --rc genhtml_branch_coverage=1 00:10:23.371 --rc genhtml_function_coverage=1 00:10:23.371 --rc genhtml_legend=1 00:10:23.371 --rc geninfo_all_blocks=1 00:10:23.371 --rc geninfo_unexecuted_blocks=1 00:10:23.371 00:10:23.371 ' 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:23.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.371 --rc genhtml_branch_coverage=1 00:10:23.371 --rc genhtml_function_coverage=1 00:10:23.371 --rc genhtml_legend=1 00:10:23.371 --rc geninfo_all_blocks=1 00:10:23.371 --rc geninfo_unexecuted_blocks=1 00:10:23.371 00:10:23.371 ' 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.371 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.631 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.631 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.631 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.631 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.631 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:23.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:23.632 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:30.205 
13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:30.205 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:30.205 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:30.206 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:30.206 Found net devices under 0000:86:00.0: cvl_0_0 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:30.206 Found net devices under 
0000:86:00.1: cvl_0_1 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:30.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:30.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:10:30.206 00:10:30.206 --- 10.0.0.2 ping statistics --- 00:10:30.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.206 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:30.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:30.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:10:30.206 00:10:30.206 --- 10.0.0.1 ping statistics --- 00:10:30.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.206 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.206 ************************************ 00:10:30.206 START TEST nvmf_filesystem_no_in_capsule 00:10:30.206 ************************************ 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
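The nvmf_tcp_init block above wires the two ports of the same physical NIC into a self-contained topology: the target-side port is moved into a private network namespace, so the pings (and later the NVMe/TCP traffic) genuinely cross the link between 0000:86:00.0 and 0000:86:00.1 instead of short-circuiting through the host stack. Condensed from the trace:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1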
00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2747989 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2747989 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2747989 ']' 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.206 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.206 [2024-11-19 13:02:32.860247] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:10:30.206 [2024-11-19 13:02:32.860296] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.206 [2024-11-19 13:02:32.938679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.206 [2024-11-19 13:02:32.981669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.206 [2024-11-19 13:02:32.981707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.207 [2024-11-19 13:02:32.981714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.207 [2024-11-19 13:02:32.981720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.207 [2024-11-19 13:02:32.981726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
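With nvmf_tgt up and listening on /var/tmp/spdk.sock, the target configuration traced over the next screens condenses to five RPCs plus the host-side connect. The rpc.py invocations below stand in for the harness's rpc_cmd wrapper; arguments and NQNs are copied from the trace:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # -c 0: no in-capsule data
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB RAM disk, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420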
00:10:30.207 [2024-11-19 13:02:32.983216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.207 [2024-11-19 13:02:32.983327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.207 [2024-11-19 13:02:32.983432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.207 [2024-11-19 13:02:32.983433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.207 [2024-11-19 13:02:33.120847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.207 Malloc1 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.207 13:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.207 [2024-11-19 13:02:33.261908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:30.207 { 00:10:30.207 "name": "Malloc1", 00:10:30.207 "aliases": [ 00:10:30.207 "a902a302-c159-4ba9-a935-fda2837bb778" 00:10:30.207 ], 00:10:30.207 "product_name": "Malloc disk", 00:10:30.207 "block_size": 512, 00:10:30.207 "num_blocks": 1048576, 00:10:30.207 "uuid": "a902a302-c159-4ba9-a935-fda2837bb778", 00:10:30.207 "assigned_rate_limits": { 00:10:30.207 "rw_ios_per_sec": 0, 00:10:30.207 "rw_mbytes_per_sec": 0, 00:10:30.207 "r_mbytes_per_sec": 0, 00:10:30.207 "w_mbytes_per_sec": 0 00:10:30.207 }, 00:10:30.207 "claimed": true, 00:10:30.207 "claim_type": "exclusive_write", 00:10:30.207 "zoned": false, 00:10:30.207 "supported_io_types": { 00:10:30.207 "read": 
true, 00:10:30.207 "write": true, 00:10:30.207 "unmap": true, 00:10:30.207 "flush": true, 00:10:30.207 "reset": true, 00:10:30.207 "nvme_admin": false, 00:10:30.207 "nvme_io": false, 00:10:30.207 "nvme_io_md": false, 00:10:30.207 "write_zeroes": true, 00:10:30.207 "zcopy": true, 00:10:30.207 "get_zone_info": false, 00:10:30.207 "zone_management": false, 00:10:30.207 "zone_append": false, 00:10:30.207 "compare": false, 00:10:30.207 "compare_and_write": false, 00:10:30.207 "abort": true, 00:10:30.207 "seek_hole": false, 00:10:30.207 "seek_data": false, 00:10:30.207 "copy": true, 00:10:30.207 "nvme_iov_md": false 00:10:30.207 }, 00:10:30.207 "memory_domains": [ 00:10:30.207 { 00:10:30.207 "dma_device_id": "system", 00:10:30.207 "dma_device_type": 1 00:10:30.207 }, 00:10:30.207 { 00:10:30.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.207 "dma_device_type": 2 00:10:30.207 } 00:10:30.207 ], 00:10:30.207 "driver_specific": {} 00:10:30.207 } 00:10:30.207 ]' 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:30.207 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:31.585 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:31.585 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:31.585 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:31.585 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:31.585 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:33.489 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:33.748 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:33.748 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.125 ************************************ 00:10:35.125 START TEST filesystem_ext4 00:10:35.125 ************************************ 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
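
For readers following the trace: the filesystem_ext4 pass that begins here reduces to a short shell pattern. This is a minimal sketch reconstructed from the xtrace of target/filesystem.sh (lines 18-30 as traced); the device and mount point are the ones this log reports, not guaranteed defaults:

  # Exercise a freshly created filesystem over NVMe/TCP, as the harness does:
  # make it, mount it, create and delete a file with syncs in between, unmount.
  fstype=ext4                 # the run_test argument above
  dev=/dev/nvme0n1p1          # partition created by the earlier parted call
  mnt=/mnt/device             # created by the mkdir -p in the trace above
  mkfs.$fstype -F "$dev"      # ext4 forces with -F; btrfs and xfs use -f instead
  mount "$dev" "$mnt"
  touch "$mnt/aaa"
  sync
  rm "$mnt/aaa"
  sync
  umount "$mnt"

The same loop is re-run below for btrfs and xfs; only the mkfs invocation and force flag change.
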
00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:35.125 mke2fs 1.47.0 (5-Feb-2023) 00:10:35.125 Discarding device blocks: 0/522240 done 00:10:35.125 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:35.125 Filesystem UUID: 0a459ec4-f7f1-492e-982e-c0dfc3cf100c 00:10:35.125 Superblock backups stored on blocks: 00:10:35.125 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:35.125 00:10:35.125 Allocating group tables: 0/64 done 00:10:35.125 Writing inode tables: 0/64 done 00:10:35.125 Creating journal (8192 blocks): done 00:10:35.125 Writing superblocks and filesystem accounting information: 0/64 done 00:10:35.125 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:35.125 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:41.691 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:41.691 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:41.691 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:41.691 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:41.691 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:41.691 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:41.691 
13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2747989 00:10:41.691 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:41.691 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:41.691 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:41.691 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:41.691 00:10:41.691 real 0m5.903s 00:10:41.691 user 0m0.030s 00:10:41.691 sys 0m0.069s 00:10:41.691 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.691 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:41.691 ************************************ 00:10:41.691 END TEST filesystem_ext4 00:10:41.691 ************************************ 00:10:41.691 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:41.691 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:41.691 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.691 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.691 ************************************ 00:10:41.691 START TEST filesystem_btrfs 00:10:41.691 ************************************ 00:10:41.691 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:41.691 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:41.691 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:41.691 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:41.692 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:41.692 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:41.692 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:41.692 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:41.692 13:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:41.692 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:41.692 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:41.692 btrfs-progs v6.8.1 00:10:41.692 See https://btrfs.readthedocs.io for more information. 00:10:41.692 00:10:41.692 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:41.692 NOTE: several default settings have changed in version 5.15, please make sure 00:10:41.692 this does not affect your deployments: 00:10:41.692 - DUP for metadata (-m dup) 00:10:41.692 - enabled no-holes (-O no-holes) 00:10:41.692 - enabled free-space-tree (-R free-space-tree) 00:10:41.692 00:10:41.692 Label: (null) 00:10:41.692 UUID: f968b86d-c155-4595-acd7-e9c76f8e6759 00:10:41.692 Node size: 16384 00:10:41.692 Sector size: 4096 (CPU page size: 4096) 00:10:41.692 Filesystem size: 510.00MiB 00:10:41.692 Block group profiles: 00:10:41.692 Data: single 8.00MiB 00:10:41.692 Metadata: DUP 32.00MiB 00:10:41.692 System: DUP 8.00MiB 00:10:41.692 SSD detected: yes 00:10:41.692 Zoned device: no 00:10:41.692 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:41.692 Checksum: crc32c 00:10:41.692 Number of devices: 1 00:10:41.692 Devices: 00:10:41.692 ID SIZE PATH 00:10:41.692 1 510.00MiB /dev/nvme0n1p1 00:10:41.692 00:10:41.692 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:41.692 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2747989 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:41.951 
13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:41.951 00:10:41.951 real 0m1.114s 00:10:41.951 user 0m0.036s 00:10:41.951 sys 0m0.102s 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:41.951 ************************************ 00:10:41.951 END TEST filesystem_btrfs 00:10:41.951 ************************************ 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.951 ************************************ 00:10:41.951 START TEST filesystem_xfs 00:10:41.951 ************************************ 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:41.951 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:42.210 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:42.210 = sectsz=512 attr=2, projid32bit=1 00:10:42.210 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:42.210 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:42.210 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:42.210 = sunit=0 swidth=0 blks 00:10:42.210 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:42.210 log =internal log bsize=4096 blocks=16384, version=2 00:10:42.210 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:42.210 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:43.147 Discarding blocks...Done. 00:10:43.147 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:43.147 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:45.679 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:45.679 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:45.679 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:45.679 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:45.679 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:45.679 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:45.679 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2747989 00:10:45.679 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:45.679 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:45.679 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:45.679 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:45.679 00:10:45.679 real 0m3.499s 00:10:45.679 user 0m0.021s 00:10:45.679 sys 0m0.076s 00:10:45.679 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.679 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:45.679 ************************************ 00:10:45.679 END TEST filesystem_xfs 00:10:45.679 ************************************ 00:10:45.679 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.939 13:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2747989 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2747989 ']' 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2747989 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2747989 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2747989' 00:10:45.939 killing process with pid 2747989 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2747989 00:10:45.939 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2747989 00:10:46.505 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:46.505 00:10:46.505 real 0m16.798s 00:10:46.505 user 1m6.087s 00:10:46.505 sys 0m1.359s 00:10:46.505 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.506 ************************************ 00:10:46.506 END TEST nvmf_filesystem_no_in_capsule 00:10:46.506 ************************************ 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:46.506 ************************************ 00:10:46.506 START TEST nvmf_filesystem_in_capsule 00:10:46.506 ************************************ 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2750977 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2750977 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2750977 ']' 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
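
Before the startup logs that follow, it may help to see the in-capsule bring-up in one place. This is a hedged reconstruction of the rpc_cmd calls traced in this section, rewritten against SPDK's standalone scripts/rpc.py; the NQN, serial, listen address, and the 4096-byte in-capsule size are taken verbatim from the log, while the sleep is a crude stand-in for the harness's waitforlisten helper and the network-namespace wrapper (ip netns exec) visible in the trace is omitted:

  # Start the target and build the same subsystem this test run creates.
  ./build/bin/nvmf_tgt -m 0xF &
  nvmfpid=$!
  sleep 2                                    # stand-in for waitforlisten "$nvmfpid"
  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096    # -c 4096: in-capsule data size
  $rpc bdev_malloc_create 512 512 -b Malloc1              # 512 MiB backing bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The only difference from the no_in_capsule run earlier in this section is the -c 4096 argument to nvmf_create_transport, which is exactly what the nvmf_filesystem_part 4096 invocation above exists to cover.
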
00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.506 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.506 [2024-11-19 13:02:49.730399] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:10:46.506 [2024-11-19 13:02:49.730444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.506 [2024-11-19 13:02:49.810836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.506 [2024-11-19 13:02:49.849382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.506 [2024-11-19 13:02:49.849419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.506 [2024-11-19 13:02:49.849427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.506 [2024-11-19 13:02:49.849433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.506 [2024-11-19 13:02:49.849438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.506 [2024-11-19 13:02:49.850899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.506 [2024-11-19 13:02:49.851019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.506 [2024-11-19 13:02:49.851060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.506 [2024-11-19 13:02:49.851061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.443 [2024-11-19 13:02:50.613714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.443 13:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.443 Malloc1 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.443 [2024-11-19 13:02:50.758653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:47.443 13:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.443 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:47.443 { 00:10:47.443 "name": "Malloc1", 00:10:47.443 "aliases": [ 00:10:47.443 "ccfb4be8-111f-43ed-a55f-d8c5fc94b5f4" 00:10:47.443 ], 00:10:47.443 "product_name": "Malloc disk", 00:10:47.443 "block_size": 512, 00:10:47.443 "num_blocks": 1048576, 00:10:47.443 "uuid": "ccfb4be8-111f-43ed-a55f-d8c5fc94b5f4", 00:10:47.443 "assigned_rate_limits": { 00:10:47.443 "rw_ios_per_sec": 0, 00:10:47.443 "rw_mbytes_per_sec": 0, 00:10:47.444 "r_mbytes_per_sec": 0, 00:10:47.444 "w_mbytes_per_sec": 0 00:10:47.444 }, 00:10:47.444 "claimed": true, 00:10:47.444 "claim_type": "exclusive_write", 00:10:47.444 "zoned": false, 00:10:47.444 "supported_io_types": { 00:10:47.444 "read": true, 00:10:47.444 "write": true, 00:10:47.444 "unmap": true, 00:10:47.444 "flush": true, 00:10:47.444 "reset": true, 00:10:47.444 "nvme_admin": false, 00:10:47.444 "nvme_io": false, 00:10:47.444 "nvme_io_md": false, 00:10:47.444 "write_zeroes": true, 00:10:47.444 "zcopy": true, 00:10:47.444 "get_zone_info": false, 00:10:47.444 "zone_management": false, 00:10:47.444 "zone_append": false, 00:10:47.444 "compare": false, 00:10:47.444 "compare_and_write": false, 00:10:47.444 "abort": true, 00:10:47.444 "seek_hole": false, 00:10:47.444 "seek_data": false, 00:10:47.444 "copy": true, 00:10:47.444 "nvme_iov_md": false 00:10:47.444 }, 00:10:47.444 "memory_domains": [ 00:10:47.444 { 00:10:47.444 "dma_device_id": "system", 00:10:47.444 "dma_device_type": 1 00:10:47.444 }, 00:10:47.444 { 00:10:47.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.444 "dma_device_type": 2 00:10:47.444 } 00:10:47.444 ], 00:10:47.444 "driver_specific": {} 00:10:47.444 } 00:10:47.444 ]' 00:10:47.444 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:47.703 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:47.703 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:47.703 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:47.703 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:47.703 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:47.703 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:47.703 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:48.640 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:48.640 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:48.640 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.640 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:48.640 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:51.296 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:51.296 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:51.296 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.296 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:51.296 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.296 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:51.296 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:51.296 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:51.296 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:51.296 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:51.296 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:51.296 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:51.296 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:51.296 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:51.296 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:51.296 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:51.296 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:51.296 13:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:51.296 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:52.255 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:52.255 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:52.255 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:52.255 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.255 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.255 ************************************ 00:10:52.256 START TEST filesystem_in_capsule_ext4 00:10:52.256 ************************************ 00:10:52.256 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:52.256 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:52.256 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.256 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:52.256 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:52.256 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:52.256 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:52.256 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:52.256 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:52.256 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:52.256 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:52.256 mke2fs 1.47.0 (5-Feb-2023) 00:10:52.515 Discarding device blocks: 0/522240 done 00:10:52.515 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:52.515 Filesystem UUID: 03200876-6f61-48f7-89d0-367e6b561481 00:10:52.515 Superblock backups stored on blocks: 00:10:52.515 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:52.515 00:10:52.515 Allocating group tables: 0/64 done 00:10:52.515 Writing inode tables: 
0/64 done 00:10:55.049 Creating journal (8192 blocks): done 00:10:56.931 Writing superblocks and filesystem accounting information: 0/64 done 00:10:56.931 00:10:56.931 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:56.931 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:02.201 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:02.201 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:02.201 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:02.201 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:02.201 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:02.201 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:02.201 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2750977 00:11:02.201 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:02.201 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:02.202 00:11:02.202 real 0m9.808s 00:11:02.202 user 0m0.026s 00:11:02.202 sys 0m0.078s 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:02.202 ************************************ 00:11:02.202 END TEST filesystem_in_capsule_ext4 00:11:02.202 ************************************ 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.202 
************************************ 00:11:02.202 START TEST filesystem_in_capsule_btrfs 00:11:02.202 ************************************ 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:02.202 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:02.461 btrfs-progs v6.8.1 00:11:02.461 See https://btrfs.readthedocs.io for more information. 00:11:02.461 00:11:02.461 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:02.461 NOTE: several default settings have changed in version 5.15, please make sure 00:11:02.461 this does not affect your deployments: 00:11:02.461 - DUP for metadata (-m dup) 00:11:02.461 - enabled no-holes (-O no-holes) 00:11:02.461 - enabled free-space-tree (-R free-space-tree) 00:11:02.461 00:11:02.461 Label: (null) 00:11:02.461 UUID: bf34081f-7c78-4ea0-8fe5-19e67832b708 00:11:02.461 Node size: 16384 00:11:02.461 Sector size: 4096 (CPU page size: 4096) 00:11:02.461 Filesystem size: 510.00MiB 00:11:02.461 Block group profiles: 00:11:02.461 Data: single 8.00MiB 00:11:02.461 Metadata: DUP 32.00MiB 00:11:02.461 System: DUP 8.00MiB 00:11:02.461 SSD detected: yes 00:11:02.461 Zoned device: no 00:11:02.461 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:02.461 Checksum: crc32c 00:11:02.461 Number of devices: 1 00:11:02.461 Devices: 00:11:02.461 ID SIZE PATH 00:11:02.461 1 510.00MiB /dev/nvme0n1p1 00:11:02.461 00:11:02.461 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:02.462 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:02.721 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:02.721 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:02.721 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:02.721 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:02.721 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:02.721 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:02.721 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2750977 00:11:02.721 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:02.721 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:02.721 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:02.721 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:02.721 00:11:02.721 real 0m0.547s 00:11:02.721 user 0m0.032s 00:11:02.721 sys 0m0.112s 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:02.721 ************************************ 00:11:02.721 END TEST filesystem_in_capsule_btrfs 00:11:02.721 ************************************ 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.721 ************************************ 00:11:02.721 START TEST filesystem_in_capsule_xfs 00:11:02.721 ************************************ 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:02.721 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:02.980 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:02.980 = sectsz=512 attr=2, projid32bit=1 00:11:02.980 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:02.980 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:02.980 data = bsize=4096 blocks=130560, imaxpct=25 00:11:02.980 = sunit=0 swidth=0 blks 00:11:02.980 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:02.980 log =internal log bsize=4096 blocks=16384, version=2 00:11:02.980 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:02.980 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:03.546 Discarding blocks...Done. 
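
Once the xfs pass below finishes, the run unwinds everything. A minimal sketch of that teardown, using the names from the trace; killprocess is the harness's kill-and-wait wrapper, approximated here with plain kill/wait on a target started from the same shell:

  # Unwind in reverse order of setup: drop the test partition, detach the
  # initiator, delete the subsystem, then stop the nvmf_tgt process.
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # remove the partition under a lock
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid"
  wait "$nvmfpid"                                  # returns once the target has exited
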
00:11:03.546 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:03.546 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:05.451 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:05.451 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:05.451 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:05.451 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:05.451 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:05.451 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:05.451 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2750977 00:11:05.451 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:05.451 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:05.451 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:05.451 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:05.451 00:11:05.451 real 0m2.635s 00:11:05.451 user 0m0.028s 00:11:05.451 sys 0m0.068s 00:11:05.451 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.451 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:05.451 ************************************ 00:11:05.451 END TEST filesystem_in_capsule_xfs 00:11:05.451 ************************************ 00:11:05.451 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:05.710 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:05.710 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:05.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2750977 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2750977 ']' 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2750977 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2750977 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2750977' 00:11:05.968 killing process with pid 2750977 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2750977 00:11:05.968 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2750977 00:11:06.227 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:06.227 00:11:06.227 real 0m19.867s 00:11:06.227 user 1m18.368s 00:11:06.227 sys 0m1.494s 00:11:06.227 13:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.227 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.227 ************************************ 00:11:06.227 END TEST nvmf_filesystem_in_capsule 00:11:06.227 ************************************ 00:11:06.227 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:06.227 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:06.227 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:06.227 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:06.227 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:06.227 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.227 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:06.227 rmmod nvme_tcp 00:11:06.227 rmmod nvme_fabrics 00:11:06.486 rmmod nvme_keyring 00:11:06.486 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:06.486 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:06.486 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:06.486 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:06.486 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:06.486 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:06.486 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:06.486 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:06.486 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:06.486 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:06.486 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:06.486 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.486 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:06.486 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.486 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.486 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.391 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:08.391 00:11:08.391 real 0m45.418s 00:11:08.391 user 2m26.481s 00:11:08.391 sys 0m7.612s 00:11:08.391 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.391 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:08.391 
************************************ 00:11:08.391 END TEST nvmf_filesystem 00:11:08.391 ************************************ 00:11:08.392 13:03:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:08.392 13:03:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:08.392 13:03:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.392 13:03:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:08.652 ************************************ 00:11:08.652 START TEST nvmf_target_discovery 00:11:08.652 ************************************ 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:08.652 * Looking for test storage... 00:11:08.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:08.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.652 --rc genhtml_branch_coverage=1 00:11:08.652 --rc genhtml_function_coverage=1 00:11:08.652 --rc genhtml_legend=1 00:11:08.652 --rc geninfo_all_blocks=1 00:11:08.652 --rc geninfo_unexecuted_blocks=1 00:11:08.652 00:11:08.652 ' 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:08.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.652 --rc genhtml_branch_coverage=1 00:11:08.652 --rc genhtml_function_coverage=1 00:11:08.652 --rc genhtml_legend=1 00:11:08.652 --rc geninfo_all_blocks=1 00:11:08.652 --rc geninfo_unexecuted_blocks=1 00:11:08.652 00:11:08.652 ' 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:08.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.652 --rc genhtml_branch_coverage=1 00:11:08.652 --rc genhtml_function_coverage=1 00:11:08.652 --rc genhtml_legend=1 00:11:08.652 --rc geninfo_all_blocks=1 00:11:08.652 --rc geninfo_unexecuted_blocks=1 00:11:08.652 00:11:08.652 ' 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:08.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.652 --rc genhtml_branch_coverage=1 00:11:08.652 --rc genhtml_function_coverage=1 00:11:08.652 --rc genhtml_legend=1 00:11:08.652 --rc geninfo_all_blocks=1 00:11:08.652 --rc geninfo_unexecuted_blocks=1 00:11:08.652 00:11:08.652 ' 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.652 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.653 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.653 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.653 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:08.653 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.653 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:08.653 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.653 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.653 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.653 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.653 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.653 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.653 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.653 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.653 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:08.653 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.224 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.224 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:15.224 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:15.224 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:15.225 13:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:15.225 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:15.225 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:15.225 Found net devices under 0000:86:00.0: cvl_0_0 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
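[Editor's note: a rough sketch of what nvmf/common.sh is doing in this stretch. It walks the supported PCI IDs (here Intel 0x8086:0x159b, the E810 ports bound to the ice driver) and collects the kernel net devices registered under each function's sysfs node; the lspci form below is illustrative, not the script's literal code:]
  for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do   # e810 NIC functions
    for path in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$path" ] || continue                                  # skip functions with no netdev
      echo "Found net devices under $pci: ${path##*/}"
    done
  done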
00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:15.225 Found net devices under 0000:86:00.1: cvl_0_1 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:15.225 13:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:15.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:11:15.225 00:11:15.225 --- 10.0.0.2 ping statistics --- 00:11:15.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.225 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:11:15.225 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:15.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:11:15.225 00:11:15.225 --- 10.0.0.1 ping statistics --- 00:11:15.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.226 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2758477 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2758477 00:11:15.226 13:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2758477 ']' 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.226 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.226 [2024-11-19 13:03:18.023188] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:11:15.226 [2024-11-19 13:03:18.023248] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.226 [2024-11-19 13:03:18.101008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.226 [2024-11-19 13:03:18.141625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.226 [2024-11-19 13:03:18.141664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.226 [2024-11-19 13:03:18.141671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.226 [2024-11-19 13:03:18.141677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.226 [2024-11-19 13:03:18.141682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
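[Editor's note: the target/initiator topology assembled above can be reproduced by hand; every command below is taken from this trace, with the target port moved into a network namespace so both ends of the TCP connection can live on one host:]
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                       # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target ns -> initiator
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF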
00:11:15.226 [2024-11-19 13:03:18.143287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.226 [2024-11-19 13:03:18.143396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.226 [2024-11-19 13:03:18.143504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.226 [2024-11-19 13:03:18.143505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.226 [2024-11-19 13:03:18.292885] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.226 Null1 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.226 13:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.226 [2024-11-19 13:03:18.338345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.226 Null2 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:15.226 Null3 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.226 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.227 Null4 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.227 13:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.227 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:15.486 00:11:15.486 Discovery Log Number of Records 6, Generation counter 6 00:11:15.486 =====Discovery Log Entry 0====== 00:11:15.486 trtype: tcp 00:11:15.486 adrfam: ipv4 00:11:15.486 subtype: current discovery subsystem 00:11:15.486 treq: not required 00:11:15.486 portid: 0 00:11:15.486 trsvcid: 4420 00:11:15.486 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:15.486 traddr: 10.0.0.2 00:11:15.486 eflags: explicit discovery connections, duplicate discovery information 00:11:15.486 sectype: none 00:11:15.486 =====Discovery Log Entry 1====== 00:11:15.486 trtype: tcp 00:11:15.486 adrfam: ipv4 00:11:15.486 subtype: nvme subsystem 00:11:15.486 treq: not required 00:11:15.486 portid: 0 00:11:15.487 trsvcid: 4420 00:11:15.487 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:15.487 traddr: 10.0.0.2 00:11:15.487 eflags: none 00:11:15.487 sectype: none 00:11:15.487 =====Discovery Log Entry 2====== 00:11:15.487 trtype: tcp 00:11:15.487 adrfam: ipv4 00:11:15.487 subtype: nvme subsystem 00:11:15.487 treq: not required 00:11:15.487 portid: 0 00:11:15.487 trsvcid: 4420 00:11:15.487 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:15.487 traddr: 10.0.0.2 00:11:15.487 eflags: none 00:11:15.487 sectype: none 00:11:15.487 =====Discovery Log Entry 3====== 00:11:15.487 trtype: tcp 00:11:15.487 adrfam: ipv4 00:11:15.487 subtype: nvme subsystem 00:11:15.487 treq: not required 00:11:15.487 portid: 0 00:11:15.487 trsvcid: 4420 00:11:15.487 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:15.487 traddr: 10.0.0.2 00:11:15.487 eflags: none 00:11:15.487 sectype: none 00:11:15.487 =====Discovery Log Entry 4====== 00:11:15.487 trtype: tcp 00:11:15.487 adrfam: ipv4 00:11:15.487 subtype: nvme subsystem 
00:11:15.487 treq: not required 00:11:15.487 portid: 0 00:11:15.487 trsvcid: 4420 00:11:15.487 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:15.487 traddr: 10.0.0.2 00:11:15.487 eflags: none 00:11:15.487 sectype: none 00:11:15.487 =====Discovery Log Entry 5====== 00:11:15.487 trtype: tcp 00:11:15.487 adrfam: ipv4 00:11:15.487 subtype: discovery subsystem referral 00:11:15.487 treq: not required 00:11:15.487 portid: 0 00:11:15.487 trsvcid: 4430 00:11:15.487 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:15.487 traddr: 10.0.0.2 00:11:15.487 eflags: none 00:11:15.487 sectype: none 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:15.487 Perform nvmf subsystem discovery via RPC 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.487 [ 00:11:15.487 { 00:11:15.487 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:15.487 "subtype": "Discovery", 00:11:15.487 "listen_addresses": [ 00:11:15.487 { 00:11:15.487 "trtype": "TCP", 00:11:15.487 "adrfam": "IPv4", 00:11:15.487 "traddr": "10.0.0.2", 00:11:15.487 "trsvcid": "4420" 00:11:15.487 } 00:11:15.487 ], 00:11:15.487 "allow_any_host": true, 00:11:15.487 "hosts": [] 00:11:15.487 }, 00:11:15.487 { 00:11:15.487 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.487 "subtype": "NVMe", 00:11:15.487 "listen_addresses": [ 00:11:15.487 { 00:11:15.487 "trtype": "TCP", 00:11:15.487 "adrfam": "IPv4", 00:11:15.487 "traddr": "10.0.0.2", 00:11:15.487 "trsvcid": "4420" 00:11:15.487 } 00:11:15.487 ], 00:11:15.487 "allow_any_host": true, 00:11:15.487 "hosts": [], 00:11:15.487 "serial_number": "SPDK00000000000001", 00:11:15.487 "model_number": "SPDK bdev Controller", 00:11:15.487 "max_namespaces": 32, 00:11:15.487 "min_cntlid": 1, 00:11:15.487 "max_cntlid": 65519, 00:11:15.487 "namespaces": [ 00:11:15.487 { 00:11:15.487 "nsid": 1, 00:11:15.487 "bdev_name": "Null1", 00:11:15.487 "name": "Null1", 00:11:15.487 "nguid": "7DBBEB3997364D238274ECF4DABA87F4", 00:11:15.487 "uuid": "7dbbeb39-9736-4d23-8274-ecf4daba87f4" 00:11:15.487 } 00:11:15.487 ] 00:11:15.487 }, 00:11:15.487 { 00:11:15.487 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:15.487 "subtype": "NVMe", 00:11:15.487 "listen_addresses": [ 00:11:15.487 { 00:11:15.487 "trtype": "TCP", 00:11:15.487 "adrfam": "IPv4", 00:11:15.487 "traddr": "10.0.0.2", 00:11:15.487 "trsvcid": "4420" 00:11:15.487 } 00:11:15.487 ], 00:11:15.487 "allow_any_host": true, 00:11:15.487 "hosts": [], 00:11:15.487 "serial_number": "SPDK00000000000002", 00:11:15.487 "model_number": "SPDK bdev Controller", 00:11:15.487 "max_namespaces": 32, 00:11:15.487 "min_cntlid": 1, 00:11:15.487 "max_cntlid": 65519, 00:11:15.487 "namespaces": [ 00:11:15.487 { 00:11:15.487 "nsid": 1, 00:11:15.487 "bdev_name": "Null2", 00:11:15.487 "name": "Null2", 00:11:15.487 "nguid": "70491077D2A24FDDA5A2B1BAC0240F51", 00:11:15.487 "uuid": "70491077-d2a2-4fdd-a5a2-b1bac0240f51" 00:11:15.487 } 00:11:15.487 ] 00:11:15.487 }, 00:11:15.487 { 00:11:15.487 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:15.487 "subtype": "NVMe", 00:11:15.487 "listen_addresses": [ 00:11:15.487 { 00:11:15.487 "trtype": "TCP", 00:11:15.487 "adrfam": "IPv4", 00:11:15.487 "traddr": "10.0.0.2", 
00:11:15.487 "trsvcid": "4420" 00:11:15.487 } 00:11:15.487 ], 00:11:15.487 "allow_any_host": true, 00:11:15.487 "hosts": [], 00:11:15.487 "serial_number": "SPDK00000000000003", 00:11:15.487 "model_number": "SPDK bdev Controller", 00:11:15.487 "max_namespaces": 32, 00:11:15.487 "min_cntlid": 1, 00:11:15.487 "max_cntlid": 65519, 00:11:15.487 "namespaces": [ 00:11:15.487 { 00:11:15.487 "nsid": 1, 00:11:15.487 "bdev_name": "Null3", 00:11:15.487 "name": "Null3", 00:11:15.487 "nguid": "7E0C7972C71740419770E9F2D6FB8B91", 00:11:15.487 "uuid": "7e0c7972-c717-4041-9770-e9f2d6fb8b91" 00:11:15.487 } 00:11:15.487 ] 00:11:15.487 }, 00:11:15.487 { 00:11:15.487 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:15.487 "subtype": "NVMe", 00:11:15.487 "listen_addresses": [ 00:11:15.487 { 00:11:15.487 "trtype": "TCP", 00:11:15.487 "adrfam": "IPv4", 00:11:15.487 "traddr": "10.0.0.2", 00:11:15.487 "trsvcid": "4420" 00:11:15.487 } 00:11:15.487 ], 00:11:15.487 "allow_any_host": true, 00:11:15.487 "hosts": [], 00:11:15.487 "serial_number": "SPDK00000000000004", 00:11:15.487 "model_number": "SPDK bdev Controller", 00:11:15.487 "max_namespaces": 32, 00:11:15.487 "min_cntlid": 1, 00:11:15.487 "max_cntlid": 65519, 00:11:15.487 "namespaces": [ 00:11:15.487 { 00:11:15.487 "nsid": 1, 00:11:15.487 "bdev_name": "Null4", 00:11:15.487 "name": "Null4", 00:11:15.487 "nguid": "870F5BAF82F147C288944B460DF2FE01", 00:11:15.487 "uuid": "870f5baf-82f1-47c2-8894-4b460df2fe01" 00:11:15.487 } 00:11:15.487 ] 00:11:15.487 } 00:11:15.487 ] 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.487 13:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:15.487 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:15.488 13:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.488 rmmod nvme_tcp 00:11:15.488 rmmod nvme_fabrics 00:11:15.488 rmmod nvme_keyring 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2758477 ']' 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2758477 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2758477 ']' 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2758477 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.488 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2758477 00:11:15.747 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.747 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.747 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2758477' 00:11:15.747 killing process with pid 2758477 00:11:15.747 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2758477 00:11:15.747 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2758477 00:11:15.747 13:03:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:15.747 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:15.747 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:15.747 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:15.747 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:15.747 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:15.747 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:15.747 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:15.747 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:15.747 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.747 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.748 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:18.296 00:11:18.296 real 0m9.349s 00:11:18.296 user 0m5.612s 00:11:18.296 sys 0m4.793s 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:18.296 ************************************ 00:11:18.296 END TEST nvmf_target_discovery 00:11:18.296 ************************************ 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:18.296 ************************************ 00:11:18.296 START TEST nvmf_referrals 00:11:18.296 ************************************ 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:18.296 * Looking for test storage... 
00:11:18.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:18.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.296 --rc genhtml_branch_coverage=1 00:11:18.296 --rc genhtml_function_coverage=1 00:11:18.296 --rc genhtml_legend=1 00:11:18.296 --rc geninfo_all_blocks=1 00:11:18.296 --rc geninfo_unexecuted_blocks=1 00:11:18.296 00:11:18.296 ' 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:18.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.296 --rc genhtml_branch_coverage=1 00:11:18.296 --rc genhtml_function_coverage=1 00:11:18.296 --rc genhtml_legend=1 00:11:18.296 --rc geninfo_all_blocks=1 00:11:18.296 --rc geninfo_unexecuted_blocks=1 00:11:18.296 00:11:18.296 ' 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:18.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.296 --rc genhtml_branch_coverage=1 00:11:18.296 --rc genhtml_function_coverage=1 00:11:18.296 --rc genhtml_legend=1 00:11:18.296 --rc geninfo_all_blocks=1 00:11:18.296 --rc geninfo_unexecuted_blocks=1 00:11:18.296 00:11:18.296 ' 00:11:18.296 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:18.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.296 --rc genhtml_branch_coverage=1 00:11:18.296 --rc genhtml_function_coverage=1 00:11:18.296 --rc genhtml_legend=1 00:11:18.296 --rc geninfo_all_blocks=1 00:11:18.296 --rc geninfo_unexecuted_blocks=1 00:11:18.297 00:11:18.297 ' 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
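The "[: : integer expression expected" message above comes from build_nvmf_app_args in test/nvmf/common.sh testing an empty string with -eq (the trace shows '[' '' -eq 1 ']' at nvmf/common.sh@33). A minimal reproduction plus a defensive rewrite, assuming a flag variable that may be unset (the name FLAG is hypothetical, not the variable common.sh actually uses):

FLAG=""
[ "$FLAG" -eq 1 ] && echo yes      # errors: empty string is not an integer
[ "${FLAG:-0}" -eq 1 ] && echo yes # treats unset/empty as 0, no error

The run continues past the message (the next guard at common.sh@37 executes normally), so it is noise here rather than a failure.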
00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:18.297 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:24.871 13:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:24.871 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:24.871 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:24.871 
13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.871 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:24.872 Found net devices under 0000:86:00.0: cvl_0_0 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:24.872 Found net devices under 0000:86:00.1: cvl_0_1 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:24.872 13:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:24.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:24.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:11:24.872 00:11:24.872 --- 10.0.0.2 ping statistics --- 00:11:24.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.872 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:24.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:11:24.872 00:11:24.872 --- 10.0.0.1 ping statistics --- 00:11:24.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.872 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2762203 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2762203 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2762203 ']' 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
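Before starting the target, nvmftestinit wired the two e810 ports into a split-namespace topology; the ping entries above confirm reachability in both directions. Condensed, the steps traced above amount to the following (interface and namespace names as printed in this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt, as shown below), so it listens on 10.0.0.2 while the nvme CLI connects from the root namespace.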
00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.872 [2024-11-19 13:03:27.443781] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:11:24.872 [2024-11-19 13:03:27.443825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.872 [2024-11-19 13:03:27.522082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.872 [2024-11-19 13:03:27.563150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.872 [2024-11-19 13:03:27.563189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.872 [2024-11-19 13:03:27.563196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.872 [2024-11-19 13:03:27.563204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.872 [2024-11-19 13:03:27.563210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.872 [2024-11-19 13:03:27.564691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.872 [2024-11-19 13:03:27.564799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.872 [2024-11-19 13:03:27.564913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.872 [2024-11-19 13:03:27.564914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.872 [2024-11-19 13:03:27.709086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:24.872 [2024-11-19 13:03:27.722495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.872 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:24.873 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:24.873 13:03:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:24.873 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:25.133 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:25.392 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:25.392 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:25.392 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:25.392 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:25.392 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:25.392 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.392 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:25.392 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.652 13:03:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:25.652 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:25.911 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:25.911 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:25.911 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:25.911 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:25.911 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:25.911 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:25.911 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:26.169 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:26.169 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:26.169 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:26.169 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:26.169 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:26.169 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:26.169 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:26.169 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:26.169 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.169 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.169 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.169 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:26.169 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:26.169 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.169 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:26.169 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
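Taken together, the referral assertions above reduce to a short add/verify/remove cycle. A minimal sketch, assuming a running nvmf target with its discovery service on 10.0.0.2:8009 and SPDK's scripts/rpc.py on PATH (the 127.0.0.x referral address, port 4430, and the jq filters are taken from the trace; the --hostnqn/--hostid flags from the log are omitted for brevity):

# Register a referral under the discovery subsystem.
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery

# Target-side view: list referrals over JSON-RPC.
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

# Host-side view: a discovery log page read should show the same entry.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

# Remove it again and confirm the referral list is empty.
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
scripts/rpc.py nvmf_discovery_get_referrals | jq length    # expect 0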
00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.428 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.428 rmmod nvme_tcp 00:11:26.428 rmmod nvme_fabrics 00:11:26.687 rmmod nvme_keyring 00:11:26.687 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.687 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:26.687 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:26.687 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2762203 ']' 00:11:26.687 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2762203 00:11:26.687 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2762203 ']' 00:11:26.687 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2762203 00:11:26.687 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:26.687 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.687 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2762203 00:11:26.687 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.687 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.687 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2762203' 00:11:26.687 killing process with pid 2762203 00:11:26.687 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2762203 00:11:26.687 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2762203 00:11:26.945 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:26.945 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:26.945 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:26.945 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:26.945 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:26.945 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:26.945 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:26.945 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.945 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:26.945 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.945 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.945 13:03:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.851 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:28.851 00:11:28.851 real 0m10.946s 00:11:28.851 user 0m12.594s 00:11:28.851 sys 0m5.212s 00:11:28.851 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.851 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.851 ************************************ 00:11:28.851 END TEST nvmf_referrals 00:11:28.851 ************************************ 00:11:28.851 13:03:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:28.851 13:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:28.851 13:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.851 13:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:28.851 ************************************ 00:11:28.851 START TEST nvmf_connect_disconnect 00:11:28.851 ************************************ 00:11:28.851 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:29.111 * Looking for test storage... 00:11:29.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.111 13:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.111 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:29.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.112 --rc genhtml_branch_coverage=1 00:11:29.112 --rc genhtml_function_coverage=1 00:11:29.112 --rc genhtml_legend=1 00:11:29.112 --rc geninfo_all_blocks=1 00:11:29.112 --rc geninfo_unexecuted_blocks=1 00:11:29.112 00:11:29.112 ' 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:29.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.112 --rc genhtml_branch_coverage=1 00:11:29.112 --rc genhtml_function_coverage=1 00:11:29.112 --rc genhtml_legend=1 00:11:29.112 --rc geninfo_all_blocks=1 00:11:29.112 --rc geninfo_unexecuted_blocks=1 00:11:29.112 00:11:29.112 ' 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:29.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.112 --rc genhtml_branch_coverage=1 00:11:29.112 --rc genhtml_function_coverage=1 00:11:29.112 --rc genhtml_legend=1 00:11:29.112 --rc geninfo_all_blocks=1 00:11:29.112 --rc geninfo_unexecuted_blocks=1 00:11:29.112 00:11:29.112 ' 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:29.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.112 --rc genhtml_branch_coverage=1 00:11:29.112 --rc genhtml_function_coverage=1 00:11:29.112 --rc genhtml_legend=1 00:11:29.112 --rc geninfo_all_blocks=1 00:11:29.112 --rc geninfo_unexecuted_blocks=1 00:11:29.112 00:11:29.112 ' 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.112 13:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.112 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.113 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.113 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.113 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.113 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:29.113 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.113 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.113 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.113 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.113 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.113 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.113 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.113 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.113 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:29.113 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:29.113 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.113 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:35.687 
13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:35.687 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.687 
13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:35.687 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:35.687 Found net devices under 0000:86:00.0: cvl_0_0 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
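The device scan in this stretch maps each matching PCI function to its kernel net interface through sysfs before any namespace plumbing happens. Roughly, and with the operstate read as an assumption about how the traced [[ up == up ]] test is fed (the 0000:86:00.0 address and the cvl_0_0 name come from the trace):

pci=0000:86:00.0
# A network PCI function exposes its interface name(s) under net/.
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")        # keep just the name, e.g. cvl_0_0
for dev in "${pci_net_devs[@]}"; do
    # Assumed: only interfaces reporting operstate "up" are kept as candidates.
    [[ $(cat "/sys/class/net/$dev/operstate") == up ]] && echo "usable: $dev"
done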
00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:35.687 Found net devices under 0000:86:00.1: cvl_0_1 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.687 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:35.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:11:35.688 00:11:35.688 --- 10.0.0.2 ping statistics --- 00:11:35.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.688 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:11:35.688 00:11:35.688 --- 10.0.0.1 ping statistics --- 00:11:35.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.688 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2766125 00:11:35.688 13:03:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2766125 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2766125 ']' 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.688 [2024-11-19 13:03:38.452257] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:11:35.688 [2024-11-19 13:03:38.452309] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.688 [2024-11-19 13:03:38.531808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.688 [2024-11-19 13:03:38.575388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.688 [2024-11-19 13:03:38.575429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.688 [2024-11-19 13:03:38.575436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.688 [2024-11-19 13:03:38.575442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.688 [2024-11-19 13:03:38.575447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
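Stepping back, the network bring-up that precedes this target launch puts one NIC port in its own namespace so host and target traffic cross a real link rather than loopback. A condensed sketch of the commands traced above (interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the nvmf_tgt flags all follow the log):

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator keeps the peer port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ping -c 1 10.0.0.2                                  # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# The target runs inside the namespace, so it listens on 10.0.0.2.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &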
00:11:35.688 [2024-11-19 13:03:38.577071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.688 [2024-11-19 13:03:38.577188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.688 [2024-11-19 13:03:38.577296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.688 [2024-11-19 13:03:38.577297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.688 [2024-11-19 13:03:38.718789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.688 13:03:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.688 [2024-11-19 13:03:38.784793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:35.688 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:38.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:52.133 rmmod nvme_tcp 00:11:52.133 rmmod nvme_fabrics 00:11:52.133 rmmod nvme_keyring 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2766125 ']' 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2766125 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2766125 ']' 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2766125 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
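The five "disconnected 1 controller(s)" lines above are the substance of this test: once the subsystem is provisioned over JSON-RPC, the host attaches and detaches it in a loop. A sketch assuming nvme-cli and scripts/rpc.py on PATH; every RPC name, the NQN, serial, block size, and listener address below appear in the trace, while the --hostnqn/--hostid options from the log are again omitted:

# Target side: transport, backing bdev, subsystem, namespace, listener.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
scripts/rpc.py bdev_malloc_create 64 512             # 64 MiB bdev, returns Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: five connect/disconnect iterations, matching num_iterations=5.
for i in {1..5}; do
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # "... disconnected 1 controller(s)"
done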
00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2766125 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2766125' 00:11:52.133 killing process with pid 2766125 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2766125 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2766125 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.133 13:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.039 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:54.039 00:11:54.039 real 0m25.157s 00:11:54.039 user 1m8.061s 00:11:54.039 sys 0m5.876s 00:11:54.039 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.039 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 ************************************ 00:11:54.040 END TEST nvmf_connect_disconnect 00:11:54.040 ************************************ 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.300 13:03:57 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:54.300 ************************************ 00:11:54.300 START TEST nvmf_multitarget 00:11:54.300 ************************************ 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:54.300 * Looking for test storage... 00:11:54.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:54.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.300 --rc genhtml_branch_coverage=1 00:11:54.300 --rc genhtml_function_coverage=1 00:11:54.300 --rc genhtml_legend=1 00:11:54.300 --rc geninfo_all_blocks=1 00:11:54.300 --rc geninfo_unexecuted_blocks=1 00:11:54.300 00:11:54.300 ' 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:54.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.300 --rc genhtml_branch_coverage=1 00:11:54.300 --rc genhtml_function_coverage=1 00:11:54.300 --rc genhtml_legend=1 00:11:54.300 --rc geninfo_all_blocks=1 00:11:54.300 --rc geninfo_unexecuted_blocks=1 00:11:54.300 00:11:54.300 ' 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:54.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.300 --rc genhtml_branch_coverage=1 00:11:54.300 --rc genhtml_function_coverage=1 00:11:54.300 --rc genhtml_legend=1 00:11:54.300 --rc geninfo_all_blocks=1 00:11:54.300 --rc geninfo_unexecuted_blocks=1 00:11:54.300 00:11:54.300 ' 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:54.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.300 --rc genhtml_branch_coverage=1 00:11:54.300 --rc genhtml_function_coverage=1 00:11:54.300 --rc genhtml_legend=1 00:11:54.300 --rc geninfo_all_blocks=1 00:11:54.300 --rc geninfo_unexecuted_blocks=1 00:11:54.300 00:11:54.300 ' 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.300 13:03:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.300 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:54.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:54.301 13:03:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:54.301 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.975 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
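At this point nvmftestinit is classifying the host's NICs: gather_supported_nvmf_pci_devs fills per-family device-ID arrays (Intel E810 0x1592/0x159b and X722 0x37d2 above; the Mellanox ConnectX IDs follow just below) and then walks the PCI bus looking for matches. A reduced sketch of that scan over sysfs, using the E810 IDs from the trace (the loop shape is an approximation of common.sh, not a copy):

    # match Intel E810 NICs by vendor/device ID, as the trace does for 0000:86:00.0/.1
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")
        device=$(<"$pci/device")
        if [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
            ls "$pci/net"       # kernel interface behind the port, e.g. cvl_0_0
        fi
    done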
00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:00.976 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:00.976 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:00.976 Found net devices under 0000:86:00.0: cvl_0_0 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:00.976 Found net devices under 0000:86:00.1: cvl_0_1 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:00.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:12:00.976 00:12:00.976 --- 10.0.0.2 ping statistics --- 00:12:00.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.976 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:00.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:12:00.976 00:12:00.976 --- 10.0.0.1 ping statistics --- 00:12:00.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.976 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.976 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2772512 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2772512 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2772512 ']' 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:00.977 [2024-11-19 13:04:03.712750] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
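The pings above close out nvmf_tcp_init. On a phy run with two ports of the same E810 card, the harness splits the ports across a network namespace rather than creating veth pairs: cvl_0_0 becomes the target interface (10.0.0.2) inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and nvmf_tgt itself is then launched under "ip netns exec" (the SPDK/DPDK startup banner around this point). Condensed from the trace, with the interface names exactly as logged:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns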
00:12:00.977 [2024-11-19 13:04:03.712795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.977 [2024-11-19 13:04:03.791780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.977 [2024-11-19 13:04:03.834699] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.977 [2024-11-19 13:04:03.834740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.977 [2024-11-19 13:04:03.834747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.977 [2024-11-19 13:04:03.834754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.977 [2024-11-19 13:04:03.834759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.977 [2024-11-19 13:04:03.836212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.977 [2024-11-19 13:04:03.836325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.977 [2024-11-19 13:04:03.836431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.977 [2024-11-19 13:04:03.836431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:00.977 13:04:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:00.977 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:00.977 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:00.977 "nvmf_tgt_1" 00:12:00.977 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:00.977 "nvmf_tgt_2" 00:12:00.977 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
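From here the multitarget test is pure RPC: multitarget_rpc.py counts targets with nvmf_get_targets piped to jq length (the count check started above completes just below), creates nvmf_tgt_1 and nvmf_tgt_2, confirms the count went from 1 to 3, deletes both, and confirms it is back to 1. The whole exercise condenses to a few lines (script path as logged; running it under set -e is an assumption, since the traced script uses explicit '[' comparisons instead):

    set -e
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + two named targets
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default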
00:12:00.977 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:01.237 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:01.237 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:01.237 true 00:12:01.237 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:01.237 true 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:01.496 rmmod nvme_tcp 00:12:01.496 rmmod nvme_fabrics 00:12:01.496 rmmod nvme_keyring 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2772512 ']' 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2772512 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2772512 ']' 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2772512 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2772512 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:01.496 13:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2772512' 00:12:01.496 killing process with pid 2772512 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2772512 00:12:01.496 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2772512 00:12:01.756 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:01.756 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:01.756 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:01.756 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:01.756 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:01.756 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:01.756 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:01.756 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:01.756 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:01.756 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.756 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.756 13:04:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:04.294 00:12:04.294 real 0m9.619s 00:12:04.294 user 0m7.158s 00:12:04.294 sys 0m4.906s 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:04.294 ************************************ 00:12:04.294 END TEST nvmf_multitarget 00:12:04.294 ************************************ 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:04.294 ************************************ 00:12:04.294 START TEST nvmf_rpc 00:12:04.294 ************************************ 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:04.294 * Looking for test storage... 
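With nvmf_multitarget wrapped up (real 0m9.619s above), autotest chains straight into the next suite through the same run_test harness: a START/END banner pair, a time measurement, and the "Looking for test storage... / Found test storage..." probe that resolves the per-suite working directory (the Found line follows just below). Roughly, run_test behaves like the sketch here; the real helper in autotest_common.sh also validates its arguments and manages xtrace, so this is only the visible shape:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"      # emits the real/user/sys summary seen after each suite
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp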
00:12:04.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:04.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.294 --rc genhtml_branch_coverage=1 00:12:04.294 --rc genhtml_function_coverage=1 00:12:04.294 --rc genhtml_legend=1 00:12:04.294 --rc geninfo_all_blocks=1 00:12:04.294 --rc geninfo_unexecuted_blocks=1 00:12:04.294 00:12:04.294 ' 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:04.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.294 --rc genhtml_branch_coverage=1 00:12:04.294 --rc genhtml_function_coverage=1 00:12:04.294 --rc genhtml_legend=1 00:12:04.294 --rc geninfo_all_blocks=1 00:12:04.294 --rc geninfo_unexecuted_blocks=1 00:12:04.294 00:12:04.294 ' 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:04.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.294 --rc genhtml_branch_coverage=1 00:12:04.294 --rc genhtml_function_coverage=1 00:12:04.294 --rc genhtml_legend=1 00:12:04.294 --rc geninfo_all_blocks=1 00:12:04.294 --rc geninfo_unexecuted_blocks=1 00:12:04.294 00:12:04.294 ' 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:04.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.294 --rc genhtml_branch_coverage=1 00:12:04.294 --rc genhtml_function_coverage=1 00:12:04.294 --rc genhtml_legend=1 00:12:04.294 --rc geninfo_all_blocks=1 00:12:04.294 --rc geninfo_unexecuted_blocks=1 00:12:04.294 00:12:04.294 ' 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
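The block above is the lcov probe that opens each suite in this log: common.sh pipes "lcov --version" through awk '{print $NF}' and hands the result to lt(), which splits both version strings on '.', '-' and ':' and compares them field by field; 1.15 sorts before 2, so the legacy --rc lcov_* coverage options get exported. The comparison core, boiled down (lt and cmp_versions are the real names in scripts/common.sh; this reduction is a sketch):

    # does version $1 sort strictly before $2?  e.g. lt 1.15 2 -> true
    lt() {
        local -a ver1 ver2; local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal, so not strictly less
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov, using legacy --rc options"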
00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.294 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:04.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:04.295 13:04:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:04.295 13:04:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:10.866 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:10.866 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.866 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.867 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:10.867 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:10.867 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:10.867 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.867 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.867 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.867 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.867 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.867 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.867 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.867 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:10.867 Found net devices under 0000:86:00.0: cvl_0_0 00:12:10.867 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:10.867 Found net devices under 0000:86:00.1: cvl_0_1 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:10.867 13:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:10.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:10.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms
00:12:10.867
00:12:10.867 --- 10.0.0.2 ping statistics ---
00:12:10.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:10.867 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:10.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:10.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms
00:12:10.867
00:12:10.867 --- 10.0.0.1 ping statistics ---
00:12:10.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:10.867 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2776304
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2776304
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2776304 ']'
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:10.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:10.867 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:10.867 [2024-11-19 13:04:13.350128] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
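Note: the nvmf_tcp_init sequence traced above reduces to the following minimal sketch. It splits the two discovered ice ports across network namespaces so initiator and target talk over a real TCP path, opens the NVMe/TCP port, and verifies reachability both ways (interface names and addresses are the ones discovered above; run as root):

    ip netns add cvl_0_0_ns_spdk                                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # the ipts wrapper above also tags this rule with an SPDK_NVMF comment
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator

The target is then launched inside that namespace (nvmf/common.sh@508 above: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so its 10.0.0.2:4420 listener is reachable only through the cvl_0_0 interface, not via loopback.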
00:12:10.867 [2024-11-19 13:04:13.350180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.867 [2024-11-19 13:04:13.428212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.867 [2024-11-19 13:04:13.471613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.867 [2024-11-19 13:04:13.471652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.867 [2024-11-19 13:04:13.471659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.867 [2024-11-19 13:04:13.471665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.867 [2024-11-19 13:04:13.471670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.867 [2024-11-19 13:04:13.473240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.867 [2024-11-19 13:04:13.473276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.867 [2024-11-19 13:04:13.473382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.867 [2024-11-19 13:04:13.473383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:10.868 "tick_rate": 2300000000, 00:12:10.868 "poll_groups": [ 00:12:10.868 { 00:12:10.868 "name": "nvmf_tgt_poll_group_000", 00:12:10.868 "admin_qpairs": 0, 00:12:10.868 "io_qpairs": 0, 00:12:10.868 "current_admin_qpairs": 0, 00:12:10.868 "current_io_qpairs": 0, 00:12:10.868 "pending_bdev_io": 0, 00:12:10.868 "completed_nvme_io": 0, 00:12:10.868 "transports": [] 00:12:10.868 }, 00:12:10.868 { 00:12:10.868 "name": "nvmf_tgt_poll_group_001", 00:12:10.868 "admin_qpairs": 0, 00:12:10.868 "io_qpairs": 0, 00:12:10.868 "current_admin_qpairs": 0, 00:12:10.868 "current_io_qpairs": 0, 00:12:10.868 "pending_bdev_io": 0, 00:12:10.868 "completed_nvme_io": 0, 00:12:10.868 "transports": [] 00:12:10.868 }, 00:12:10.868 { 00:12:10.868 "name": "nvmf_tgt_poll_group_002", 00:12:10.868 "admin_qpairs": 0, 00:12:10.868 "io_qpairs": 0, 00:12:10.868 
"current_admin_qpairs": 0, 00:12:10.868 "current_io_qpairs": 0, 00:12:10.868 "pending_bdev_io": 0, 00:12:10.868 "completed_nvme_io": 0, 00:12:10.868 "transports": [] 00:12:10.868 }, 00:12:10.868 { 00:12:10.868 "name": "nvmf_tgt_poll_group_003", 00:12:10.868 "admin_qpairs": 0, 00:12:10.868 "io_qpairs": 0, 00:12:10.868 "current_admin_qpairs": 0, 00:12:10.868 "current_io_qpairs": 0, 00:12:10.868 "pending_bdev_io": 0, 00:12:10.868 "completed_nvme_io": 0, 00:12:10.868 "transports": [] 00:12:10.868 } 00:12:10.868 ] 00:12:10.868 }' 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.868 [2024-11-19 13:04:13.719187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:10.868 "tick_rate": 2300000000, 00:12:10.868 "poll_groups": [ 00:12:10.868 { 00:12:10.868 "name": "nvmf_tgt_poll_group_000", 00:12:10.868 "admin_qpairs": 0, 00:12:10.868 "io_qpairs": 0, 00:12:10.868 "current_admin_qpairs": 0, 00:12:10.868 "current_io_qpairs": 0, 00:12:10.868 "pending_bdev_io": 0, 00:12:10.868 "completed_nvme_io": 0, 00:12:10.868 "transports": [ 00:12:10.868 { 00:12:10.868 "trtype": "TCP" 00:12:10.868 } 00:12:10.868 ] 00:12:10.868 }, 00:12:10.868 { 00:12:10.868 "name": "nvmf_tgt_poll_group_001", 00:12:10.868 "admin_qpairs": 0, 00:12:10.868 "io_qpairs": 0, 00:12:10.868 "current_admin_qpairs": 0, 00:12:10.868 "current_io_qpairs": 0, 00:12:10.868 "pending_bdev_io": 0, 00:12:10.868 "completed_nvme_io": 0, 00:12:10.868 "transports": [ 00:12:10.868 { 00:12:10.868 "trtype": "TCP" 00:12:10.868 } 00:12:10.868 ] 00:12:10.868 }, 00:12:10.868 { 00:12:10.868 "name": "nvmf_tgt_poll_group_002", 00:12:10.868 "admin_qpairs": 0, 00:12:10.868 "io_qpairs": 0, 00:12:10.868 "current_admin_qpairs": 0, 00:12:10.868 "current_io_qpairs": 0, 00:12:10.868 "pending_bdev_io": 0, 00:12:10.868 "completed_nvme_io": 0, 00:12:10.868 "transports": [ 00:12:10.868 { 00:12:10.868 "trtype": "TCP" 
00:12:10.868 } 00:12:10.868 ] 00:12:10.868 }, 00:12:10.868 { 00:12:10.868 "name": "nvmf_tgt_poll_group_003", 00:12:10.868 "admin_qpairs": 0, 00:12:10.868 "io_qpairs": 0, 00:12:10.868 "current_admin_qpairs": 0, 00:12:10.868 "current_io_qpairs": 0, 00:12:10.868 "pending_bdev_io": 0, 00:12:10.868 "completed_nvme_io": 0, 00:12:10.868 "transports": [ 00:12:10.868 { 00:12:10.868 "trtype": "TCP" 00:12:10.868 } 00:12:10.868 ] 00:12:10.868 } 00:12:10.868 ] 00:12:10.868 }' 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.868 Malloc1 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.868 [2024-11-19 13:04:13.906599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:10.868 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.869 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:10.869 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.869 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:10.869 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.869 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:10.869 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:10.869 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:10.869 [2024-11-19 13:04:13.941205] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:10.869 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:10.869 could not add new controller: failed to write to nvme-fabrics device 00:12:10.869 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:10.869 13:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:10.869 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:10.869 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:10.869 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:10.869 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.869 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.869 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.869 13:04:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:11.804 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:11.804 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:11.804 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:11.804 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:11.804 13:04:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:14.335 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:14.335 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:14.335 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.335 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:14.335 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.335 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:14.335 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.336 [2024-11-19 13:04:17.278845] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:14.336 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:14.336 could not add new controller: failed to write to nvme-fabrics device 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.336 
13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.336 13:04:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.271 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.271 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:15.271 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.271 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:15.271 13:04:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:17.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:17.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:17.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:17.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:17.175 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.434 
13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.434 [2024-11-19 13:04:20.643746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.434 13:04:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.811 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.811 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:18.811 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.811 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:18.811 13:04:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.715 [2024-11-19 13:04:23.988106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.715 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:20.715 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.715 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.715 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.715 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.092 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.092 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:22.092 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.092 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:22.092 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.996 [2024-11-19 13:04:27.339774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.996 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.372 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:25.372 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:25.372 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.372 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:25.372 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:27.287 
13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:27.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.287 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.288 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.288 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.288 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
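Note: rpc_cmd in this trace is a thin wrapper that forwards to the target's JSON-RPC socket; one iteration of the rpc.sh@81 loop, repeated five times here (loops=5 at rpc.sh@11), written out against SPDK's scripts/rpc.py would look roughly like this (the rpc.py invocation is an assumption; every subcommand and flag below appears verbatim in the trace):

    rpc=scripts/rpc.py          # run from the spdk checkout; talks to /var/tmp/spdk.sock
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    hostid=80aaeb9f-0274-ea11-906e-0017a4403562

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # Malloc1 as nsid 5
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$hostnqn" --hostid="$hostid"
    # ... the test only waits for the namespace to surface, then tears down:
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The earlier rpc.sh@58-73 sequence exercised per-host access control the same way: with allow_any_host disabled, connect fails with "Subsystem ... does not allow host", and it succeeds only after nvmf_subsystem_add_host registers the host NQN (or allow_any_host -e re-enables the wildcard).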
00:12:27.288 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.288 [2024-11-19 13:04:30.643304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.288 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.288 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:27.288 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.288 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.288 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.288 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:27.288 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.288 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.546 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.546 13:04:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.481 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.481 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:28.481 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.481 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:28.481 13:04:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
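Note: waitforserial and waitforserial_disconnect, whose probe loops dominate the tail of this log, can be reconstructed from the xtrace; the loop bound, the 2-second poll, and the lsblk probe are all visible above, but the canonical helpers live in test/common/autotest_common.sh, so treat this as a sketch:

    waitforserial() {           # block until lsblk shows device(s) with this serial
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

waitforserial_disconnect inverts the check, polling lsblk -l -o NAME,SERIAL with grep -q -w until the serial disappears (the @1223-@1235 records above).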
00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.014 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.015 [2024-11-19 13:04:33.974346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.015 13:04:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.957 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.957 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:31.957 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.957 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:31.957 13:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:33.861 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:33.861 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:33.861 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.862 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:33.862 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.862 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:33.862 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.862 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.862 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:33.862 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:33.862 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.862 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:33.862 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.862 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:33.862 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:33.862 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:34.121 
13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.121 [2024-11-19 13:04:37.278423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.121 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 [2024-11-19 13:04:37.326499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 
13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 [2024-11-19 13:04:37.374639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 [2024-11-19 13:04:37.422815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.122 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.123 [2024-11-19 13:04:37.470988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.123 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.123 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.123 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.123 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.123 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.123 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:34.123 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:34.123 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:34.123 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:34.123 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:34.123 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:34.123 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:12:34.382 "tick_rate": 2300000000,
00:12:34.382 "poll_groups": [
00:12:34.382 {
00:12:34.382 "name": "nvmf_tgt_poll_group_000",
00:12:34.382 "admin_qpairs": 2,
00:12:34.382 "io_qpairs": 168,
00:12:34.382 "current_admin_qpairs": 0,
00:12:34.382 "current_io_qpairs": 0,
00:12:34.382 "pending_bdev_io": 0,
00:12:34.382 "completed_nvme_io": 220,
00:12:34.382 "transports": [
00:12:34.382 {
00:12:34.382 "trtype": "TCP"
00:12:34.382 }
00:12:34.382 ]
00:12:34.382 },
00:12:34.382 {
00:12:34.382 "name": "nvmf_tgt_poll_group_001",
00:12:34.382 "admin_qpairs": 2,
00:12:34.382 "io_qpairs": 168,
00:12:34.382 "current_admin_qpairs": 0,
00:12:34.382 "current_io_qpairs": 0,
00:12:34.382 "pending_bdev_io": 0,
00:12:34.382 "completed_nvme_io": 317,
00:12:34.382 "transports": [
00:12:34.382 {
00:12:34.382 "trtype": "TCP"
00:12:34.382 }
00:12:34.382 ]
00:12:34.382 },
00:12:34.382 {
00:12:34.382 "name": "nvmf_tgt_poll_group_002",
00:12:34.382 "admin_qpairs": 1,
00:12:34.382 "io_qpairs": 168,
00:12:34.382 "current_admin_qpairs": 0,
00:12:34.382 "current_io_qpairs": 0,
00:12:34.382 "pending_bdev_io": 0,
00:12:34.382 "completed_nvme_io": 219,
00:12:34.382 "transports": [
00:12:34.382 {
00:12:34.382 "trtype": "TCP"
00:12:34.382 }
00:12:34.382 ]
00:12:34.382 },
00:12:34.382 {
00:12:34.382 "name": "nvmf_tgt_poll_group_003",
00:12:34.382 "admin_qpairs": 2,
00:12:34.382 "io_qpairs": 168,
00:12:34.382 "current_admin_qpairs": 0,
00:12:34.382 "current_io_qpairs": 0,
00:12:34.382 "pending_bdev_io": 0,
00:12:34.382 "completed_nvme_io": 266,
00:12:34.382 "transports": [
00:12:34.382 {
00:12:34.382 "trtype": "TCP"
00:12:34.382 }
00:12:34.382 ]
00:12:34.382 }
00:12:34.382 ]
00:12:34.382 }'
00:12:34.382 13:04:37
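The poll-group totals in the JSON above are reduced by the harness's jsum helper, whose jq and awk stages appear next in the trace (target/rpc.sh@19-20, feeding the (( 7 > 0 )) and (( 672 > 0 )) checks just below). A sketch reconstructed from those visible lines, reading the stats JSON from stdin:

jsum() {
    local filter=$1
    # emit one number per poll group, then sum them
    jq "$filter" | awk '{s+=$1} END {print s}'
}
# jsum '.poll_groups[].admin_qpairs' <<< "$stats"   # 2+2+1+2 -> 7
# jsum '.poll_groups[].io_qpairs'    <<< "$stats"   # 4 x 168 -> 672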
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:34.382 rmmod nvme_tcp 00:12:34.382 rmmod nvme_fabrics 00:12:34.382 rmmod nvme_keyring 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2776304 ']' 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2776304 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2776304 ']' 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2776304 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2776304 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2776304' 00:12:34.382 killing process with pid 2776304 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2776304 00:12:34.382 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2776304 00:12:34.642 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:34.642 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:34.642 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:34.642 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:34.642 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:34.642 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:34.642 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:34.642 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:34.642 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:34.642 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.642 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.642 13:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.181 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:37.181 00:12:37.181 real 0m32.855s 00:12:37.181 user 1m39.167s 00:12:37.181 sys 0m6.440s 00:12:37.181 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.181 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.181 ************************************ 00:12:37.181 END TEST nvmf_rpc 00:12:37.181 ************************************ 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.181 ************************************ 00:12:37.181 START TEST nvmf_invalid 00:12:37.181 ************************************ 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:37.181 * Looking for test storage... 
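The teardown that closes nvmf_rpc above reduces to two pieces: killprocess (autotest_common.sh@954-978), which verifies the pid before signalling it, and the iptr firewall cleanup (nvmf/common.sh@791). Condensed from the xtrace; an illustrative sketch, not the harness source (the real helper special-cases sudo wrappers rather than simply refusing):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0      # nothing left to kill
    local name
    name=$(ps --no-headers -o comm= "$pid")     # reactor_0 in this run
    [ "$name" = sudo ] && return 1              # condensed: never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}

# iptr: reapply the saved ruleset minus every rule tagged SPDK_NVMF
iptables-save | grep -v SPDK_NVMF | iptables-restore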
00:12:37.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:37.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.181 --rc genhtml_branch_coverage=1 00:12:37.181 --rc genhtml_function_coverage=1 00:12:37.181 --rc genhtml_legend=1 00:12:37.181 --rc geninfo_all_blocks=1 00:12:37.181 --rc geninfo_unexecuted_blocks=1 00:12:37.181 00:12:37.181 ' 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:37.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.181 --rc genhtml_branch_coverage=1 00:12:37.181 --rc genhtml_function_coverage=1 00:12:37.181 --rc genhtml_legend=1 00:12:37.181 --rc geninfo_all_blocks=1 00:12:37.181 --rc geninfo_unexecuted_blocks=1 00:12:37.181 00:12:37.181 ' 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:37.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.181 --rc genhtml_branch_coverage=1 00:12:37.181 --rc genhtml_function_coverage=1 00:12:37.181 --rc genhtml_legend=1 00:12:37.181 --rc geninfo_all_blocks=1 00:12:37.181 --rc geninfo_unexecuted_blocks=1 00:12:37.181 00:12:37.181 ' 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:37.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.181 --rc genhtml_branch_coverage=1 00:12:37.181 --rc genhtml_function_coverage=1 00:12:37.181 --rc genhtml_legend=1 00:12:37.181 --rc geninfo_all_blocks=1 00:12:37.181 --rc geninfo_unexecuted_blocks=1 00:12:37.181 00:12:37.181 ' 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:37.181 13:04:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.181 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:37.182 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:43.753 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:43.753 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:43.753 Found net devices under 0000:86:00.0: cvl_0_0 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:43.753 Found net devices under 0000:86:00.1: cvl_0_1 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:43.753 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:43.754 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.754 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:43.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:43.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:12:43.754 00:12:43.754 --- 10.0.0.2 ping statistics --- 00:12:43.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.754 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:43.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:12:43.754 00:12:43.754 --- 10.0.0.1 ping statistics --- 00:12:43.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.754 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2783979 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2783979 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2783979 ']' 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:43.754 [2024-11-19 13:04:46.275399] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
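[Editor's note] For readers reconstructing what the trace above just did: nvmf_tcp_init moves one port of the e810 NIC into a private network namespace, addresses both ends, opens the NVMe/TCP port, verifies reachability in both directions, and then nvmfappstart launches nvmf_tgt inside that namespace. A minimal sketch of that sequence follows; the interface names (cvl_0_0/cvl_0_1), addresses, namespace name, port 4420 and nvmf_tgt flags are taken from the log itself, but the sketch is illustrative, not the verbatim nvmf/common.sh implementation (paths shortened).

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP listener port
ping -c 1 10.0.0.2                                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target runs inside the namespace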
00:12:43.754 [2024-11-19 13:04:46.275441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.754 [2024-11-19 13:04:46.353525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.754 [2024-11-19 13:04:46.396953] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.754 [2024-11-19 13:04:46.396990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.754 [2024-11-19 13:04:46.396998] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.754 [2024-11-19 13:04:46.397003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.754 [2024-11-19 13:04:46.397009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:43.754 [2024-11-19 13:04:46.398465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.754 [2024-11-19 13:04:46.398577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.754 [2024-11-19 13:04:46.398683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.754 [2024-11-19 13:04:46.398684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5296 00:12:43.754 [2024-11-19 13:04:46.708476] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:43.754 { 00:12:43.754 "nqn": "nqn.2016-06.io.spdk:cnode5296", 00:12:43.754 "tgt_name": "foobar", 00:12:43.754 "method": "nvmf_create_subsystem", 00:12:43.754 "req_id": 1 00:12:43.754 } 00:12:43.754 Got JSON-RPC error response 00:12:43.754 response: 00:12:43.754 { 00:12:43.754 "code": -32603, 00:12:43.754 "message": "Unable to find target foobar" 00:12:43.754 }' 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:43.754 { 00:12:43.754 "nqn": "nqn.2016-06.io.spdk:cnode5296", 00:12:43.754 "tgt_name": "foobar", 00:12:43.754 "method": "nvmf_create_subsystem", 00:12:43.754 "req_id": 1 00:12:43.754 } 00:12:43.754 Got JSON-RPC error response 00:12:43.754 
response: 00:12:43.754 { 00:12:43.754 "code": -32603, 00:12:43.754 "message": "Unable to find target foobar" 00:12:43.754 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5656 00:12:43.754 [2024-11-19 13:04:46.909162] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5656: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:43.754 { 00:12:43.754 "nqn": "nqn.2016-06.io.spdk:cnode5656", 00:12:43.754 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:43.754 "method": "nvmf_create_subsystem", 00:12:43.754 "req_id": 1 00:12:43.754 } 00:12:43.754 Got JSON-RPC error response 00:12:43.754 response: 00:12:43.754 { 00:12:43.754 "code": -32602, 00:12:43.754 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:43.754 }' 00:12:43.754 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:43.755 { 00:12:43.755 "nqn": "nqn.2016-06.io.spdk:cnode5656", 00:12:43.755 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:43.755 "method": "nvmf_create_subsystem", 00:12:43.755 "req_id": 1 00:12:43.755 } 00:12:43.755 Got JSON-RPC error response 00:12:43.755 response: 00:12:43.755 { 00:12:43.755 "code": -32602, 00:12:43.755 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:43.755 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:43.755 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:43.755 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4585 00:12:43.755 [2024-11-19 13:04:47.117854] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4585: invalid model number 'SPDK_Controller' 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:44.015 { 00:12:44.015 "nqn": "nqn.2016-06.io.spdk:cnode4585", 00:12:44.015 "model_number": "SPDK_Controller\u001f", 00:12:44.015 "method": "nvmf_create_subsystem", 00:12:44.015 "req_id": 1 00:12:44.015 } 00:12:44.015 Got JSON-RPC error response 00:12:44.015 response: 00:12:44.015 { 00:12:44.015 "code": -32602, 00:12:44.015 "message": "Invalid MN SPDK_Controller\u001f" 00:12:44.015 }' 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:44.015 { 00:12:44.015 "nqn": "nqn.2016-06.io.spdk:cnode4585", 00:12:44.015 "model_number": "SPDK_Controller\u001f", 00:12:44.015 "method": "nvmf_create_subsystem", 00:12:44.015 "req_id": 1 00:12:44.015 } 00:12:44.015 Got JSON-RPC error response 00:12:44.015 response: 00:12:44.015 { 00:12:44.015 "code": -32602, 00:12:44.015 "message": "Invalid MN SPDK_Controller\u001f" 00:12:44.015 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:44.015 13:04:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.015 13:04:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:44.015 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:44.016 
13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 
00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ E == \- ]] 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'E>yNDC#K&'\''(KbAs36$j~K' 00:12:44.016 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'E>yNDC#K&'\''(KbAs36$j~K' nqn.2016-06.io.spdk:cnode17974 00:12:44.276 [2024-11-19 13:04:47.463026] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17974: invalid serial number 'E>yNDC#K&'(KbAs36$j~K' 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:44.276 { 00:12:44.276 "nqn": "nqn.2016-06.io.spdk:cnode17974", 00:12:44.276 "serial_number": "E>yNDC#K&'\''(KbAs36$j~K", 00:12:44.276 "method": "nvmf_create_subsystem", 00:12:44.276 "req_id": 1 00:12:44.276 } 00:12:44.276 Got JSON-RPC error response 00:12:44.276 response: 00:12:44.276 { 00:12:44.276 "code": -32602, 00:12:44.276 "message": "Invalid SN E>yNDC#K&'\''(KbAs36$j~K" 00:12:44.276 }' 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:44.276 { 00:12:44.276 "nqn": "nqn.2016-06.io.spdk:cnode17974", 00:12:44.276 "serial_number": "E>yNDC#K&'(KbAs36$j~K", 00:12:44.276 "method": "nvmf_create_subsystem", 00:12:44.276 "req_id": 1 00:12:44.276 } 00:12:44.276 Got JSON-RPC error response 00:12:44.276 response: 00:12:44.276 { 00:12:44.276 "code": -32602, 00:12:44.276 "message": "Invalid SN E>yNDC#K&'(KbAs36$j~K" 00:12:44.276 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' 
'72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.276 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 
00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 
00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.277 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.278 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:44.278 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:44.537 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 
00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 
00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ t == \- ]] 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 't]8`@r6#;!tf7T=I^aoa'\''4..Yd!YiEYUS_j;WnmNi' 00:12:44.538 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 't]8`@r6#;!tf7T=I^aoa'\''4..Yd!YiEYUS_j;WnmNi' nqn.2016-06.io.spdk:cnode18010 00:12:44.798 [2024-11-19 13:04:47.936576] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18010: invalid model number 't]8`@r6#;!tf7T=I^aoa'4..Yd!YiEYUS_j;WnmNi' 00:12:44.798 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:44.798 { 00:12:44.798 "nqn": "nqn.2016-06.io.spdk:cnode18010", 00:12:44.798 "model_number": "t]8`@r6#;!tf7T=I^aoa'\''4..Yd!YiEYUS_j;WnmNi", 00:12:44.798 "method": "nvmf_create_subsystem", 00:12:44.798 "req_id": 1 00:12:44.798 } 00:12:44.798 Got JSON-RPC error response 00:12:44.798 response: 00:12:44.798 { 00:12:44.798 "code": -32602, 00:12:44.798 "message": "Invalid MN t]8`@r6#;!tf7T=I^aoa'\''4..Yd!YiEYUS_j;WnmNi" 00:12:44.798 }' 00:12:44.798 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:44.798 { 00:12:44.798 "nqn": "nqn.2016-06.io.spdk:cnode18010", 00:12:44.798 "model_number": "t]8`@r6#;!tf7T=I^aoa'4..Yd!YiEYUS_j;WnmNi", 00:12:44.798 "method": "nvmf_create_subsystem", 00:12:44.798 "req_id": 1 00:12:44.798 } 00:12:44.798 Got JSON-RPC error response 00:12:44.798 response: 00:12:44.798 { 00:12:44.798 "code": -32602, 00:12:44.798 "message": "Invalid MN t]8`@r6#;!tf7T=I^aoa'4..Yd!YiEYUS_j;WnmNi" 00:12:44.798 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:44.798 13:04:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:44.798 [2024-11-19 13:04:48.145356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.056 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:45.056 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:45.056 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:45.056 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:45.056 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:45.056 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:45.315 [2024-11-19 13:04:48.546657] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:45.315 13:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:45.315 { 00:12:45.315 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:45.315 "listen_address": { 00:12:45.315 "trtype": "tcp", 00:12:45.315 "traddr": "", 00:12:45.315 "trsvcid": "4421" 00:12:45.315 }, 00:12:45.315 "method": "nvmf_subsystem_remove_listener", 00:12:45.315 "req_id": 1 00:12:45.315 } 00:12:45.315 Got JSON-RPC error response 00:12:45.315 response: 00:12:45.315 { 00:12:45.315 "code": -32602, 00:12:45.315 "message": "Invalid parameters" 00:12:45.315 }' 00:12:45.315 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:45.315 { 00:12:45.315 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:45.315 "listen_address": { 00:12:45.315 "trtype": "tcp", 00:12:45.315 "traddr": "", 00:12:45.315 "trsvcid": "4421" 00:12:45.315 }, 00:12:45.315 "method": "nvmf_subsystem_remove_listener", 00:12:45.315 "req_id": 1 00:12:45.315 } 00:12:45.315 Got JSON-RPC error response 00:12:45.315 response: 00:12:45.315 { 00:12:45.315 "code": -32602, 00:12:45.315 "message": "Invalid parameters" 00:12:45.315 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:45.315 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20318 -i 0 00:12:45.574 [2024-11-19 13:04:48.743389] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20318: invalid cntlid range [0-65519] 00:12:45.574 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:45.574 { 00:12:45.574 "nqn": "nqn.2016-06.io.spdk:cnode20318", 00:12:45.574 "min_cntlid": 0, 00:12:45.574 "method": "nvmf_create_subsystem", 00:12:45.574 "req_id": 1 00:12:45.574 } 00:12:45.574 Got JSON-RPC error response 00:12:45.574 response: 00:12:45.574 { 00:12:45.574 "code": -32602, 00:12:45.574 "message": "Invalid cntlid range [0-65519]" 00:12:45.574 }' 00:12:45.574 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:45.574 { 00:12:45.574 "nqn": "nqn.2016-06.io.spdk:cnode20318", 00:12:45.574 "min_cntlid": 0, 00:12:45.574 "method": "nvmf_create_subsystem", 00:12:45.574 "req_id": 1 00:12:45.574 } 00:12:45.574 Got JSON-RPC error response 00:12:45.574 response: 00:12:45.574 { 00:12:45.574 "code": -32602, 00:12:45.574 "message": "Invalid cntlid range [0-65519]" 00:12:45.574 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:45.574 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18834 -i 65520 00:12:45.574 [2024-11-19 13:04:48.940055] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18834: invalid cntlid range [65520-65519] 00:12:45.833 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:45.833 { 00:12:45.833 "nqn": "nqn.2016-06.io.spdk:cnode18834", 00:12:45.833 "min_cntlid": 65520, 00:12:45.833 "method": "nvmf_create_subsystem", 00:12:45.833 "req_id": 1 00:12:45.833 } 00:12:45.833 Got JSON-RPC error response 00:12:45.833 response: 00:12:45.833 { 00:12:45.833 "code": -32602, 00:12:45.833 "message": "Invalid cntlid range [65520-65519]" 00:12:45.833 }' 00:12:45.833 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:45.833 { 00:12:45.833 
"nqn": "nqn.2016-06.io.spdk:cnode18834", 00:12:45.833 "min_cntlid": 65520, 00:12:45.833 "method": "nvmf_create_subsystem", 00:12:45.833 "req_id": 1 00:12:45.833 } 00:12:45.833 Got JSON-RPC error response 00:12:45.833 response: 00:12:45.833 { 00:12:45.833 "code": -32602, 00:12:45.833 "message": "Invalid cntlid range [65520-65519]" 00:12:45.833 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:45.833 13:04:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18606 -I 0 00:12:45.833 [2024-11-19 13:04:49.156767] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18606: invalid cntlid range [1-0] 00:12:45.833 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:45.833 { 00:12:45.833 "nqn": "nqn.2016-06.io.spdk:cnode18606", 00:12:45.833 "max_cntlid": 0, 00:12:45.833 "method": "nvmf_create_subsystem", 00:12:45.833 "req_id": 1 00:12:45.833 } 00:12:45.833 Got JSON-RPC error response 00:12:45.833 response: 00:12:45.833 { 00:12:45.833 "code": -32602, 00:12:45.833 "message": "Invalid cntlid range [1-0]" 00:12:45.833 }' 00:12:45.833 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:45.833 { 00:12:45.833 "nqn": "nqn.2016-06.io.spdk:cnode18606", 00:12:45.833 "max_cntlid": 0, 00:12:45.833 "method": "nvmf_create_subsystem", 00:12:45.833 "req_id": 1 00:12:45.833 } 00:12:45.833 Got JSON-RPC error response 00:12:45.833 response: 00:12:45.833 { 00:12:45.833 "code": -32602, 00:12:45.833 "message": "Invalid cntlid range [1-0]" 00:12:45.833 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:45.833 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21019 -I 65520 00:12:46.090 [2024-11-19 13:04:49.361440] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21019: invalid cntlid range [1-65520] 00:12:46.090 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:46.090 { 00:12:46.090 "nqn": "nqn.2016-06.io.spdk:cnode21019", 00:12:46.090 "max_cntlid": 65520, 00:12:46.090 "method": "nvmf_create_subsystem", 00:12:46.090 "req_id": 1 00:12:46.090 } 00:12:46.090 Got JSON-RPC error response 00:12:46.090 response: 00:12:46.090 { 00:12:46.090 "code": -32602, 00:12:46.090 "message": "Invalid cntlid range [1-65520]" 00:12:46.090 }' 00:12:46.090 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:46.090 { 00:12:46.090 "nqn": "nqn.2016-06.io.spdk:cnode21019", 00:12:46.091 "max_cntlid": 65520, 00:12:46.091 "method": "nvmf_create_subsystem", 00:12:46.091 "req_id": 1 00:12:46.091 } 00:12:46.091 Got JSON-RPC error response 00:12:46.091 response: 00:12:46.091 { 00:12:46.091 "code": -32602, 00:12:46.091 "message": "Invalid cntlid range [1-65520]" 00:12:46.091 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:46.091 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28461 -i 6 -I 5 00:12:46.349 [2024-11-19 13:04:49.566193] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28461: invalid cntlid range [6-5] 00:12:46.349 13:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:46.349 { 00:12:46.349 "nqn": "nqn.2016-06.io.spdk:cnode28461", 00:12:46.349 "min_cntlid": 6, 00:12:46.349 "max_cntlid": 5, 00:12:46.349 "method": "nvmf_create_subsystem", 00:12:46.349 "req_id": 1 00:12:46.349 } 00:12:46.349 Got JSON-RPC error response 00:12:46.349 response: 00:12:46.349 { 00:12:46.349 "code": -32602, 00:12:46.349 "message": "Invalid cntlid range [6-5]" 00:12:46.349 }' 00:12:46.349 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:46.349 { 00:12:46.349 "nqn": "nqn.2016-06.io.spdk:cnode28461", 00:12:46.349 "min_cntlid": 6, 00:12:46.349 "max_cntlid": 5, 00:12:46.349 "method": "nvmf_create_subsystem", 00:12:46.349 "req_id": 1 00:12:46.349 } 00:12:46.349 Got JSON-RPC error response 00:12:46.349 response: 00:12:46.349 { 00:12:46.349 "code": -32602, 00:12:46.349 "message": "Invalid cntlid range [6-5]" 00:12:46.349 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:46.349 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:46.349 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:46.349 { 00:12:46.349 "name": "foobar", 00:12:46.349 "method": "nvmf_delete_target", 00:12:46.349 "req_id": 1 00:12:46.349 } 00:12:46.349 Got JSON-RPC error response 00:12:46.349 response: 00:12:46.349 { 00:12:46.349 "code": -32602, 00:12:46.349 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:46.349 }' 00:12:46.349 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:46.349 { 00:12:46.349 "name": "foobar", 00:12:46.349 "method": "nvmf_delete_target", 00:12:46.349 "req_id": 1 00:12:46.349 } 00:12:46.349 Got JSON-RPC error response 00:12:46.349 response: 00:12:46.349 { 00:12:46.349 "code": -32602, 00:12:46.349 "message": "The specified target doesn't exist, cannot delete it." 
00:12:46.349 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:46.349 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:46.349 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:46.349 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:46.349 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:46.349 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:46.349 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:46.349 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:46.349 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:46.349 rmmod nvme_tcp 00:12:46.609 rmmod nvme_fabrics 00:12:46.609 rmmod nvme_keyring 00:12:46.609 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:46.609 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:46.609 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:46.609 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2783979 ']' 00:12:46.609 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2783979 00:12:46.609 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2783979 ']' 00:12:46.609 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2783979 00:12:46.609 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:46.609 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.609 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2783979 00:12:46.609 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.609 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.609 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2783979' 00:12:46.609 killing process with pid 2783979 00:12:46.609 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2783979 00:12:46.609 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2783979 00:12:46.868 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:46.868 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:46.868 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:46.868 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:46.868 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:46.868 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:46.868 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- 
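The killprocess teardown traced here is deliberately careful: before signalling the recorded nvmfpid it re-reads the command name for that PID, so a PID that has since been recycled (the check singles out sudo) is never killed blindly. The guard, reduced to the checks visible in the trace, with this run's PID:

    pid=2783979                                 # nvmfpid recorded at startup
    if [ "$(uname)" = Linux ]; then
        name=$(ps --no-headers -o comm= "$pid") # reactor_0 in this run
        if [ "$name" != sudo ]; then            # don't signal a recycled/sudo PID
            echo "killing process with pid $pid"
            kill "$pid"
        fi
    fi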
# grep -v SPDK_NVMF 00:12:46.868 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:46.868 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:46.868 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.868 13:04:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.868 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.774 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:48.774 00:12:48.774 real 0m11.997s 00:12:48.774 user 0m18.580s 00:12:48.774 sys 0m5.407s 00:12:48.774 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.774 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:48.774 ************************************ 00:12:48.774 END TEST nvmf_invalid 00:12:48.774 ************************************ 00:12:48.774 13:04:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:48.774 13:04:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:48.774 13:04:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.774 13:04:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:48.774 ************************************ 00:12:48.774 START TEST nvmf_connect_stress 00:12:48.774 ************************************ 00:12:48.774 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:49.034 * Looking for test storage... 
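With the invalid-parameter suite closed out (11.997 s wall time), run_test hands control to connect_stress.sh. The same test can be launched by hand from an SPDK checkout, roughly:

    # Assumes an SPDK checkout; root is needed for modprobe and namespace setup.
    cd spdk
    sudo ./test/nvmf/target/connect_stress.sh --transport=tcp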
00:12:49.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:49.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.034 --rc genhtml_branch_coverage=1 00:12:49.034 --rc genhtml_function_coverage=1 00:12:49.034 --rc genhtml_legend=1 00:12:49.034 --rc geninfo_all_blocks=1 00:12:49.034 --rc geninfo_unexecuted_blocks=1 00:12:49.034 00:12:49.034 ' 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:49.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.034 --rc genhtml_branch_coverage=1 00:12:49.034 --rc genhtml_function_coverage=1 00:12:49.034 --rc genhtml_legend=1 00:12:49.034 --rc geninfo_all_blocks=1 00:12:49.034 --rc geninfo_unexecuted_blocks=1 00:12:49.034 00:12:49.034 ' 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:49.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.034 --rc genhtml_branch_coverage=1 00:12:49.034 --rc genhtml_function_coverage=1 00:12:49.034 --rc genhtml_legend=1 00:12:49.034 --rc geninfo_all_blocks=1 00:12:49.034 --rc geninfo_unexecuted_blocks=1 00:12:49.034 00:12:49.034 ' 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:49.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.034 --rc genhtml_branch_coverage=1 00:12:49.034 --rc genhtml_function_coverage=1 00:12:49.034 --rc genhtml_legend=1 00:12:49.034 --rc geninfo_all_blocks=1 00:12:49.034 --rc geninfo_unexecuted_blocks=1 00:12:49.034 00:12:49.034 ' 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
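The cmp_versions walk above decides which lcov flags to use: split both version strings on dots, compare field by field as integers, and treat 1.15 as older than 2, which selects the --rc lcov_branch_coverage style options that follow. A condensed re-implementation of that comparison (not the script itself, and skipping its non-numeric guards):

    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly older
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly newer
        done
        return 1                                        # equal
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"  # matches the trace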
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.034 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
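Sourcing nvmf/common.sh pins the fabric constants (ports 4420 to 4422, serial, subsystem NQN) and mints the host identity once via nvme-cli, so every later connect reuses the same pair. A sketch of how that identity is derived (the UUID is whatever gen-hostnqn returns on the host, 80aaeb9f-... in this run; the suffix-stripping is an assumption about the derivation):

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # keep just the UUID part
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    echo "${NVME_HOST[@]}"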
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:49.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:49.035 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:55.609 13:04:58 
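The "[: : integer expression expected" warning at common.sh line 33 is the classic empty-variable-meets--eq trap: the tested variable is unset, so [ receives an empty string where it needs an integer, and the test merely evaluates false instead of aborting. The usual hardening, with a hypothetical variable name since the trace does not show which flag is being tested:

    # SOME_TEST_FLAG is illustrative; defaulting to 0 keeps [ ... -eq 1 ]
    # well-typed even when the flag was never exported.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi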
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:55.609 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:55.610 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:55.610 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:55.610 Found net devices under 0000:86:00.0: cvl_0_0 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:55.610 Found net devices under 0000:86:00.1: cvl_0_1 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
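Device discovery above is a sysfs walk: each NIC's vendor:device pair is matched against the e810/x722/mlx tables built earlier, then the net devices parented by that PCI function are globbed out; both E810 ports on this machine resolve to cvl_0_0 and cvl_0_1. The lookup reduced to its core, with the PCI addresses from the log:

    for pci in 0000:86:00.0 0000:86:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$dev" ] || continue    # glob stays literal if no netdev exists
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done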
-- # net_devs+=("${pci_net_devs[@]}") 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:55.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:12:55.610 00:12:55.610 --- 10.0.0.2 ping statistics --- 00:12:55.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.610 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:55.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:55.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:12:55.610 00:12:55.610 --- 10.0.0.1 ping statistics --- 00:12:55.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.610 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:55.610 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2788294 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2788294 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2788294 ']' 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
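nvmf_tcp_init builds the whole fixture from those two ports: the target NIC moves into a private namespace, each side gets a /24 address, the ACCEPT rule is tagged with an SPDK_NVMF comment (exactly what the earlier teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore strips back out), and one ping in each direction proves the path. The same sequence standalone, with names and addresses taken from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # tag the rule so teardown can strip it by comment
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator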
/var/tmp/spdk.sock...' 00:12:55.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.611 [2024-11-19 13:04:58.364237] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:12:55.611 [2024-11-19 13:04:58.364287] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.611 [2024-11-19 13:04:58.443420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:55.611 [2024-11-19 13:04:58.485056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.611 [2024-11-19 13:04:58.485093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.611 [2024-11-19 13:04:58.485100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.611 [2024-11-19 13:04:58.485107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.611 [2024-11-19 13:04:58.485112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:55.611 [2024-11-19 13:04:58.486561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.611 [2024-11-19 13:04:58.486671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.611 [2024-11-19 13:04:58.486673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.611 [2024-11-19 13:04:58.625623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
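Target bring-up then takes four RPCs over /var/tmp/spdk.sock: the TCP transport, a subsystem that admits any host (-a) with serial SPDK00000000000001 and at most 10 namespaces (-m 10), plus, on the trace lines that follow, a 4420 listener and a 1000 MiB null bdev. As a plain script (rpc.py path illustrative, flags verbatim from the trace):

    rpc=./scripts/rpc.py    # assumed location inside an SPDK checkout
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                  # any host, 10 namespaces max
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                # 1000 MiB, 512 B blocks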
00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.611 [2024-11-19 13:04:58.645831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.611 NULL1 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2788316 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.611 13:04:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.611 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:55.612 13:04:58 
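The twenty for/cat passes above build a batch file: each iteration appends RPC text to $rpcs (rpc.txt under the target test dir), giving the main loop below a canned workload to replay while connect_stress holds its connections open. The appended commands are not visible in this trace, so the two below are illustrative only:

    rpcs=/tmp/rpc.txt    # the job points this at .../test/nvmf/target/rpc.txt
    rm -f "$rpcs"
    for i in $(seq 1 20); do
        # stand-ins for whatever each 'cat' writes in the real script
        echo "bdev_null_create NULL$i 1000 512" >> "$rpcs"
        echo "bdev_null_delete NULL$i" >> "$rpcs"
    done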
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2788316 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.612 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.871 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.871 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2788316 00:12:55.871 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.871 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.871 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.128 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.128 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2788316 00:12:56.128 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.128 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.128 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.387 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.387 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2788316 00:12:56.387 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.387 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.387 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.977 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.977 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2788316 00:12:56.977 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.977 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.977 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.318 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.318 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2788316 00:12:57.318 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.318 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.318 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.612 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.612 13:05:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2788316 00:12:57.612 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.612 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.612 13:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.892 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.892 13:05:01
[the kill -0 2788316 / rpc_cmd iteration above repeats unchanged, only the timestamps advancing, from 00:12:57 (13:05:00) through 00:13:05 (13:05:08)]
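The long run above is a single two-line loop in connect_stress.sh: line 34 probes whether the backgrounded stress tool (PID 2788316) is still alive, and line 35 issues another RPC at the target while it is. A minimal sketch of the idiom, assuming the PID was captured into $stress_pid at launch and that rpc_cmd wraps scripts/rpc.py as elsewhere in this log; names are illustrative, not the verbatim SPDK script:

    while kill -0 "$stress_pid" 2>/dev/null; do    # connect_stress.sh@34: stress tool still running?
        rpc_cmd > "$testdir/rpc.txt"               # @35: keep the target's RPC server under load
    done
    wait "$stress_pid"                             # @38: reap it once kill -0 starts failing
    rm -f "$testdir/rpc.txt"                       # @39: drop the RPC scratch file

The loop ends exactly the way the trace below shows: kill -0 eventually reports "No such process", the script waits on the PID, and the scratch file is removed.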
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2788316 00:13:05.378 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.378 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.378 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.636 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.636 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2788316 00:13:05.636 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.636 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.636 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.636 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2788316 00:13:05.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2788316) - No such process 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2788316 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:05.895 rmmod nvme_tcp 00:13:05.895 rmmod nvme_fabrics 00:13:05.895 rmmod nvme_keyring 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2788294 ']' 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2788294 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2788294 ']' 00:13:05.895 13:05:09 
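The '[' -z 2788294 ']' test above is the entry of autotest_common.sh's killprocess helper (@954-@978), which tears down the nvmf_tgt daemon: validate the PID, confirm the process exists, read its command name, refuse to signal a bare sudo wrapper, then kill and reap it. Reconstructed as a sketch from the traced lines around this point; the sudo branch's behavior is an assumption, since this run never takes it:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # @954: nothing to kill
        kill -0 "$pid" || return 0                # @958: already gone
        local process_name=
        if [ "$(uname)" = Linux ]; then           # @959: the ps flags below are Linux-only
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: resolves to reactor_1 here
        fi
        if [ "$process_name" = sudo ]; then       # @964: never signal the sudo wrapper itself
            return 1                              # assumed handling; not exercised in this log
        fi
        echo "killing process with pid $pid"      # @972
        kill "$pid"                               # @973
        wait "$pid"                               # @978: collect nvmf_tgt's exit status
    }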
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2788294 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:05.895 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.896 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2788294 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2788294' 00:13:06.154 killing process with pid 2788294 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2788294 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2788294 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.154 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:08.686 00:13:08.686 real 0m19.368s 00:13:08.686 user 0m40.562s 00:13:08.686 sys 0m8.544s 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.686 ************************************ 00:13:08.686 END TEST nvmf_connect_stress 00:13:08.686 ************************************ 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:08.686 
13:05:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:08.686 ************************************ 00:13:08.686 START TEST nvmf_fused_ordering 00:13:08.686 ************************************ 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:08.686 * Looking for test storage... 00:13:08.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:08.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.686 --rc genhtml_branch_coverage=1 00:13:08.686 --rc genhtml_function_coverage=1 00:13:08.686 --rc genhtml_legend=1 00:13:08.686 --rc geninfo_all_blocks=1 00:13:08.686 --rc geninfo_unexecuted_blocks=1 00:13:08.686 00:13:08.686 ' 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:08.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.686 --rc genhtml_branch_coverage=1 00:13:08.686 --rc genhtml_function_coverage=1 00:13:08.686 --rc genhtml_legend=1 00:13:08.686 --rc geninfo_all_blocks=1 00:13:08.686 --rc geninfo_unexecuted_blocks=1 00:13:08.686 00:13:08.686 ' 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:08.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.686 --rc genhtml_branch_coverage=1 00:13:08.686 --rc genhtml_function_coverage=1 00:13:08.686 --rc genhtml_legend=1 00:13:08.686 --rc geninfo_all_blocks=1 00:13:08.686 --rc geninfo_unexecuted_blocks=1 00:13:08.686 00:13:08.686 ' 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:08.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.686 --rc genhtml_branch_coverage=1 00:13:08.686 --rc genhtml_function_coverage=1 00:13:08.686 --rc genhtml_legend=1 00:13:08.686 --rc geninfo_all_blocks=1 00:13:08.686 --rc geninfo_unexecuted_blocks=1 00:13:08.686 00:13:08.686 ' 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories repeated five more times]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=[the same directory list, re-prefixed with /opt/go/1.21.1/bin] 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=[the same directory list, re-prefixed with /opt/protoc/21.7/bin] 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo [the exported PATH] 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:13:08.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.686 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:08.687 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:08.687 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:08.687 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.687 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.687 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.687 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:08.687 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:08.687 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:08.687 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:15.252 13:05:17 
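The array declarations beginning above and continuing below are nvmf/common.sh enumerating usable NICs: it builds ID tables for the supported Intel e810/x722 and Mellanox parts, matches the host's PCI functions against them, and then resolves each match to its kernel net device through sysfs. The resolution step traced below (@410-@429) distills to one loop:

    for pci in "${pci_devs[@]}"; do                        # e.g. 0000:86:00.0
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # @411: sysfs lists the attached netdevs
        pci_net_devs=("${pci_net_devs[@]##*/}")            # @427: strip the path, keeping e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"   # @428
        net_devs+=("${pci_net_devs[@]}")                   # @429: later becomes TCP_INTERFACE_LIST
    done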
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:15.252 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:15.252 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:15.252 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:15.253 Found net devices under 0000:86:00.0: cvl_0_0 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:15.253 Found net devices under 0000:86:00.1: cvl_0_1 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:15.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:13:15.253 00:13:15.253 --- 10.0.0.2 ping statistics --- 00:13:15.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.253 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:13:15.253 00:13:15.253 --- 10.0.0.1 ping statistics --- 00:13:15.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.253 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2793694 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2793694 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2793694 ']' 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:15.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.253 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.253 [2024-11-19 13:05:17.830966] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:13:15.253 [2024-11-19 13:05:17.831027] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.253 [2024-11-19 13:05:17.909531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.253 [2024-11-19 13:05:17.951068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.253 [2024-11-19 13:05:17.951104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.253 [2024-11-19 13:05:17.951111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.253 [2024-11-19 13:05:17.951117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.253 [2024-11-19 13:05:17.951123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.253 [2024-11-19 13:05:17.951648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.253 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.253 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:15.253 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.253 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:15.253 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.253 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.253 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:15.253 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.253 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.253 [2024-11-19 13:05:18.086773] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.253 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.254 [2024-11-19 13:05:18.110974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.254 NULL1 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.254 13:05:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:15.254 [2024-11-19 13:05:18.168762] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:13:15.254 [2024-11-19 13:05:18.168794] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2793714 ] 00:13:15.254 Attached to nqn.2016-06.io.spdk:cnode1 00:13:15.254 Namespace ID: 1 size: 1GB 00:13:15.254 fused_ordering(0) 00:13:15.254 fused_ordering(1) 00:13:15.254 fused_ordering(2) 00:13:15.254 fused_ordering(3) 00:13:15.254
[fused_ordering(4) through fused_ordering(420) continue in sequence, one per line, with timestamps advancing from 00:13:15.254 to 00:13:15.772]
fused_ordering(421) 00:13:15.772 fused_ordering(422) 00:13:15.772 fused_ordering(423) 00:13:15.772 fused_ordering(424) 00:13:15.772 fused_ordering(425) 00:13:15.772 fused_ordering(426) 00:13:15.772 fused_ordering(427) 00:13:15.772 fused_ordering(428) 00:13:15.772 fused_ordering(429) 00:13:15.772 fused_ordering(430) 00:13:15.772 fused_ordering(431) 00:13:15.772 fused_ordering(432) 00:13:15.772 fused_ordering(433) 00:13:15.772 fused_ordering(434) 00:13:15.772 fused_ordering(435) 00:13:15.772 fused_ordering(436) 00:13:15.772 fused_ordering(437) 00:13:15.772 fused_ordering(438) 00:13:15.772 fused_ordering(439) 00:13:15.772 fused_ordering(440) 00:13:15.772 fused_ordering(441) 00:13:15.772 fused_ordering(442) 00:13:15.772 fused_ordering(443) 00:13:15.772 fused_ordering(444) 00:13:15.772 fused_ordering(445) 00:13:15.772 fused_ordering(446) 00:13:15.772 fused_ordering(447) 00:13:15.772 fused_ordering(448) 00:13:15.772 fused_ordering(449) 00:13:15.772 fused_ordering(450) 00:13:15.772 fused_ordering(451) 00:13:15.772 fused_ordering(452) 00:13:15.772 fused_ordering(453) 00:13:15.772 fused_ordering(454) 00:13:15.772 fused_ordering(455) 00:13:15.772 fused_ordering(456) 00:13:15.772 fused_ordering(457) 00:13:15.772 fused_ordering(458) 00:13:15.772 fused_ordering(459) 00:13:15.772 fused_ordering(460) 00:13:15.772 fused_ordering(461) 00:13:15.772 fused_ordering(462) 00:13:15.772 fused_ordering(463) 00:13:15.772 fused_ordering(464) 00:13:15.772 fused_ordering(465) 00:13:15.772 fused_ordering(466) 00:13:15.772 fused_ordering(467) 00:13:15.772 fused_ordering(468) 00:13:15.772 fused_ordering(469) 00:13:15.772 fused_ordering(470) 00:13:15.772 fused_ordering(471) 00:13:15.772 fused_ordering(472) 00:13:15.772 fused_ordering(473) 00:13:15.772 fused_ordering(474) 00:13:15.772 fused_ordering(475) 00:13:15.772 fused_ordering(476) 00:13:15.772 fused_ordering(477) 00:13:15.772 fused_ordering(478) 00:13:15.772 fused_ordering(479) 00:13:15.772 fused_ordering(480) 00:13:15.772 fused_ordering(481) 00:13:15.772 fused_ordering(482) 00:13:15.772 fused_ordering(483) 00:13:15.772 fused_ordering(484) 00:13:15.772 fused_ordering(485) 00:13:15.772 fused_ordering(486) 00:13:15.772 fused_ordering(487) 00:13:15.772 fused_ordering(488) 00:13:15.772 fused_ordering(489) 00:13:15.772 fused_ordering(490) 00:13:15.772 fused_ordering(491) 00:13:15.772 fused_ordering(492) 00:13:15.772 fused_ordering(493) 00:13:15.772 fused_ordering(494) 00:13:15.772 fused_ordering(495) 00:13:15.772 fused_ordering(496) 00:13:15.772 fused_ordering(497) 00:13:15.772 fused_ordering(498) 00:13:15.772 fused_ordering(499) 00:13:15.772 fused_ordering(500) 00:13:15.772 fused_ordering(501) 00:13:15.772 fused_ordering(502) 00:13:15.772 fused_ordering(503) 00:13:15.772 fused_ordering(504) 00:13:15.772 fused_ordering(505) 00:13:15.772 fused_ordering(506) 00:13:15.772 fused_ordering(507) 00:13:15.772 fused_ordering(508) 00:13:15.772 fused_ordering(509) 00:13:15.772 fused_ordering(510) 00:13:15.772 fused_ordering(511) 00:13:15.772 fused_ordering(512) 00:13:15.772 fused_ordering(513) 00:13:15.772 fused_ordering(514) 00:13:15.772 fused_ordering(515) 00:13:15.772 fused_ordering(516) 00:13:15.772 fused_ordering(517) 00:13:15.772 fused_ordering(518) 00:13:15.772 fused_ordering(519) 00:13:15.772 fused_ordering(520) 00:13:15.772 fused_ordering(521) 00:13:15.772 fused_ordering(522) 00:13:15.772 fused_ordering(523) 00:13:15.772 fused_ordering(524) 00:13:15.772 fused_ordering(525) 00:13:15.772 fused_ordering(526) 00:13:15.772 fused_ordering(527) 00:13:15.772 fused_ordering(528) 
00:13:15.772 fused_ordering(529) 00:13:15.772 fused_ordering(530) 00:13:15.772 fused_ordering(531) 00:13:15.772 fused_ordering(532) 00:13:15.772 fused_ordering(533) 00:13:15.772 fused_ordering(534) 00:13:15.772 fused_ordering(535) 00:13:15.772 fused_ordering(536) 00:13:15.772 fused_ordering(537) 00:13:15.772 fused_ordering(538) 00:13:15.772 fused_ordering(539) 00:13:15.772 fused_ordering(540) 00:13:15.772 fused_ordering(541) 00:13:15.772 fused_ordering(542) 00:13:15.772 fused_ordering(543) 00:13:15.772 fused_ordering(544) 00:13:15.772 fused_ordering(545) 00:13:15.772 fused_ordering(546) 00:13:15.772 fused_ordering(547) 00:13:15.772 fused_ordering(548) 00:13:15.772 fused_ordering(549) 00:13:15.772 fused_ordering(550) 00:13:15.772 fused_ordering(551) 00:13:15.772 fused_ordering(552) 00:13:15.772 fused_ordering(553) 00:13:15.772 fused_ordering(554) 00:13:15.772 fused_ordering(555) 00:13:15.772 fused_ordering(556) 00:13:15.772 fused_ordering(557) 00:13:15.772 fused_ordering(558) 00:13:15.772 fused_ordering(559) 00:13:15.772 fused_ordering(560) 00:13:15.772 fused_ordering(561) 00:13:15.772 fused_ordering(562) 00:13:15.772 fused_ordering(563) 00:13:15.772 fused_ordering(564) 00:13:15.772 fused_ordering(565) 00:13:15.772 fused_ordering(566) 00:13:15.772 fused_ordering(567) 00:13:15.772 fused_ordering(568) 00:13:15.772 fused_ordering(569) 00:13:15.772 fused_ordering(570) 00:13:15.772 fused_ordering(571) 00:13:15.772 fused_ordering(572) 00:13:15.772 fused_ordering(573) 00:13:15.772 fused_ordering(574) 00:13:15.772 fused_ordering(575) 00:13:15.772 fused_ordering(576) 00:13:15.772 fused_ordering(577) 00:13:15.772 fused_ordering(578) 00:13:15.772 fused_ordering(579) 00:13:15.772 fused_ordering(580) 00:13:15.772 fused_ordering(581) 00:13:15.772 fused_ordering(582) 00:13:15.772 fused_ordering(583) 00:13:15.772 fused_ordering(584) 00:13:15.772 fused_ordering(585) 00:13:15.772 fused_ordering(586) 00:13:15.772 fused_ordering(587) 00:13:15.772 fused_ordering(588) 00:13:15.772 fused_ordering(589) 00:13:15.772 fused_ordering(590) 00:13:15.772 fused_ordering(591) 00:13:15.772 fused_ordering(592) 00:13:15.772 fused_ordering(593) 00:13:15.772 fused_ordering(594) 00:13:15.772 fused_ordering(595) 00:13:15.772 fused_ordering(596) 00:13:15.772 fused_ordering(597) 00:13:15.772 fused_ordering(598) 00:13:15.772 fused_ordering(599) 00:13:15.772 fused_ordering(600) 00:13:15.772 fused_ordering(601) 00:13:15.772 fused_ordering(602) 00:13:15.772 fused_ordering(603) 00:13:15.772 fused_ordering(604) 00:13:15.772 fused_ordering(605) 00:13:15.772 fused_ordering(606) 00:13:15.772 fused_ordering(607) 00:13:15.772 fused_ordering(608) 00:13:15.772 fused_ordering(609) 00:13:15.772 fused_ordering(610) 00:13:15.772 fused_ordering(611) 00:13:15.772 fused_ordering(612) 00:13:15.772 fused_ordering(613) 00:13:15.772 fused_ordering(614) 00:13:15.772 fused_ordering(615) 00:13:16.337 fused_ordering(616) 00:13:16.337 fused_ordering(617) 00:13:16.337 fused_ordering(618) 00:13:16.337 fused_ordering(619) 00:13:16.337 fused_ordering(620) 00:13:16.337 fused_ordering(621) 00:13:16.337 fused_ordering(622) 00:13:16.337 fused_ordering(623) 00:13:16.337 fused_ordering(624) 00:13:16.337 fused_ordering(625) 00:13:16.337 fused_ordering(626) 00:13:16.337 fused_ordering(627) 00:13:16.337 fused_ordering(628) 00:13:16.337 fused_ordering(629) 00:13:16.337 fused_ordering(630) 00:13:16.337 fused_ordering(631) 00:13:16.337 fused_ordering(632) 00:13:16.337 fused_ordering(633) 00:13:16.337 fused_ordering(634) 00:13:16.337 fused_ordering(635) 00:13:16.337 
fused_ordering(636) 00:13:16.337 fused_ordering(637) 00:13:16.337 fused_ordering(638) 00:13:16.337 fused_ordering(639) 00:13:16.337 fused_ordering(640) 00:13:16.337 fused_ordering(641) 00:13:16.337 fused_ordering(642) 00:13:16.337 fused_ordering(643) 00:13:16.337 fused_ordering(644) 00:13:16.337 fused_ordering(645) 00:13:16.337 fused_ordering(646) 00:13:16.337 fused_ordering(647) 00:13:16.337 fused_ordering(648) 00:13:16.337 fused_ordering(649) 00:13:16.337 fused_ordering(650) 00:13:16.337 fused_ordering(651) 00:13:16.337 fused_ordering(652) 00:13:16.337 fused_ordering(653) 00:13:16.337 fused_ordering(654) 00:13:16.337 fused_ordering(655) 00:13:16.337 fused_ordering(656) 00:13:16.337 fused_ordering(657) 00:13:16.337 fused_ordering(658) 00:13:16.337 fused_ordering(659) 00:13:16.337 fused_ordering(660) 00:13:16.337 fused_ordering(661) 00:13:16.337 fused_ordering(662) 00:13:16.337 fused_ordering(663) 00:13:16.337 fused_ordering(664) 00:13:16.337 fused_ordering(665) 00:13:16.337 fused_ordering(666) 00:13:16.337 fused_ordering(667) 00:13:16.337 fused_ordering(668) 00:13:16.337 fused_ordering(669) 00:13:16.337 fused_ordering(670) 00:13:16.337 fused_ordering(671) 00:13:16.337 fused_ordering(672) 00:13:16.337 fused_ordering(673) 00:13:16.337 fused_ordering(674) 00:13:16.337 fused_ordering(675) 00:13:16.337 fused_ordering(676) 00:13:16.337 fused_ordering(677) 00:13:16.337 fused_ordering(678) 00:13:16.337 fused_ordering(679) 00:13:16.337 fused_ordering(680) 00:13:16.337 fused_ordering(681) 00:13:16.337 fused_ordering(682) 00:13:16.337 fused_ordering(683) 00:13:16.337 fused_ordering(684) 00:13:16.337 fused_ordering(685) 00:13:16.337 fused_ordering(686) 00:13:16.337 fused_ordering(687) 00:13:16.337 fused_ordering(688) 00:13:16.337 fused_ordering(689) 00:13:16.337 fused_ordering(690) 00:13:16.337 fused_ordering(691) 00:13:16.337 fused_ordering(692) 00:13:16.337 fused_ordering(693) 00:13:16.337 fused_ordering(694) 00:13:16.337 fused_ordering(695) 00:13:16.337 fused_ordering(696) 00:13:16.337 fused_ordering(697) 00:13:16.337 fused_ordering(698) 00:13:16.337 fused_ordering(699) 00:13:16.337 fused_ordering(700) 00:13:16.337 fused_ordering(701) 00:13:16.337 fused_ordering(702) 00:13:16.337 fused_ordering(703) 00:13:16.337 fused_ordering(704) 00:13:16.337 fused_ordering(705) 00:13:16.337 fused_ordering(706) 00:13:16.337 fused_ordering(707) 00:13:16.337 fused_ordering(708) 00:13:16.337 fused_ordering(709) 00:13:16.337 fused_ordering(710) 00:13:16.337 fused_ordering(711) 00:13:16.337 fused_ordering(712) 00:13:16.337 fused_ordering(713) 00:13:16.337 fused_ordering(714) 00:13:16.337 fused_ordering(715) 00:13:16.337 fused_ordering(716) 00:13:16.337 fused_ordering(717) 00:13:16.337 fused_ordering(718) 00:13:16.337 fused_ordering(719) 00:13:16.337 fused_ordering(720) 00:13:16.337 fused_ordering(721) 00:13:16.337 fused_ordering(722) 00:13:16.337 fused_ordering(723) 00:13:16.337 fused_ordering(724) 00:13:16.337 fused_ordering(725) 00:13:16.337 fused_ordering(726) 00:13:16.337 fused_ordering(727) 00:13:16.337 fused_ordering(728) 00:13:16.337 fused_ordering(729) 00:13:16.337 fused_ordering(730) 00:13:16.337 fused_ordering(731) 00:13:16.337 fused_ordering(732) 00:13:16.337 fused_ordering(733) 00:13:16.337 fused_ordering(734) 00:13:16.337 fused_ordering(735) 00:13:16.337 fused_ordering(736) 00:13:16.337 fused_ordering(737) 00:13:16.337 fused_ordering(738) 00:13:16.337 fused_ordering(739) 00:13:16.337 fused_ordering(740) 00:13:16.337 fused_ordering(741) 00:13:16.337 fused_ordering(742) 00:13:16.337 fused_ordering(743) 
00:13:16.337 fused_ordering(744) 00:13:16.337 fused_ordering(745) 00:13:16.337 fused_ordering(746) 00:13:16.337 fused_ordering(747) 00:13:16.337 fused_ordering(748) 00:13:16.337 fused_ordering(749) 00:13:16.337 fused_ordering(750) 00:13:16.337 fused_ordering(751) 00:13:16.337 fused_ordering(752) 00:13:16.337 fused_ordering(753) 00:13:16.337 fused_ordering(754) 00:13:16.337 fused_ordering(755) 00:13:16.337 fused_ordering(756) 00:13:16.337 fused_ordering(757) 00:13:16.337 fused_ordering(758) 00:13:16.337 fused_ordering(759) 00:13:16.337 fused_ordering(760) 00:13:16.337 fused_ordering(761) 00:13:16.337 fused_ordering(762) 00:13:16.337 fused_ordering(763) 00:13:16.337 fused_ordering(764) 00:13:16.337 fused_ordering(765) 00:13:16.337 fused_ordering(766) 00:13:16.337 fused_ordering(767) 00:13:16.337 fused_ordering(768) 00:13:16.337 fused_ordering(769) 00:13:16.337 fused_ordering(770) 00:13:16.337 fused_ordering(771) 00:13:16.337 fused_ordering(772) 00:13:16.337 fused_ordering(773) 00:13:16.337 fused_ordering(774) 00:13:16.337 fused_ordering(775) 00:13:16.337 fused_ordering(776) 00:13:16.337 fused_ordering(777) 00:13:16.337 fused_ordering(778) 00:13:16.337 fused_ordering(779) 00:13:16.337 fused_ordering(780) 00:13:16.337 fused_ordering(781) 00:13:16.337 fused_ordering(782) 00:13:16.337 fused_ordering(783) 00:13:16.337 fused_ordering(784) 00:13:16.337 fused_ordering(785) 00:13:16.337 fused_ordering(786) 00:13:16.337 fused_ordering(787) 00:13:16.337 fused_ordering(788) 00:13:16.337 fused_ordering(789) 00:13:16.337 fused_ordering(790) 00:13:16.337 fused_ordering(791) 00:13:16.337 fused_ordering(792) 00:13:16.337 fused_ordering(793) 00:13:16.337 fused_ordering(794) 00:13:16.337 fused_ordering(795) 00:13:16.337 fused_ordering(796) 00:13:16.337 fused_ordering(797) 00:13:16.337 fused_ordering(798) 00:13:16.337 fused_ordering(799) 00:13:16.337 fused_ordering(800) 00:13:16.337 fused_ordering(801) 00:13:16.337 fused_ordering(802) 00:13:16.337 fused_ordering(803) 00:13:16.337 fused_ordering(804) 00:13:16.337 fused_ordering(805) 00:13:16.337 fused_ordering(806) 00:13:16.337 fused_ordering(807) 00:13:16.337 fused_ordering(808) 00:13:16.337 fused_ordering(809) 00:13:16.337 fused_ordering(810) 00:13:16.337 fused_ordering(811) 00:13:16.337 fused_ordering(812) 00:13:16.337 fused_ordering(813) 00:13:16.337 fused_ordering(814) 00:13:16.337 fused_ordering(815) 00:13:16.337 fused_ordering(816) 00:13:16.337 fused_ordering(817) 00:13:16.337 fused_ordering(818) 00:13:16.337 fused_ordering(819) 00:13:16.337 fused_ordering(820) 00:13:16.596 fused_o[2024-11-19 13:05:19.884388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ce1f0 is same with the state(6) to be set 00:13:16.596 rdering(821) 00:13:16.596 fused_ordering(822) 00:13:16.596 fused_ordering(823) 00:13:16.596 fused_ordering(824) 00:13:16.596 fused_ordering(825) 00:13:16.596 fused_ordering(826) 00:13:16.596 fused_ordering(827) 00:13:16.596 fused_ordering(828) 00:13:16.596 fused_ordering(829) 00:13:16.596 fused_ordering(830) 00:13:16.596 fused_ordering(831) 00:13:16.596 fused_ordering(832) 00:13:16.596 fused_ordering(833) 00:13:16.596 fused_ordering(834) 00:13:16.596 fused_ordering(835) 00:13:16.596 fused_ordering(836) 00:13:16.596 fused_ordering(837) 00:13:16.596 fused_ordering(838) 00:13:16.596 fused_ordering(839) 00:13:16.596 fused_ordering(840) 00:13:16.596 fused_ordering(841) 00:13:16.596 fused_ordering(842) 00:13:16.596 fused_ordering(843) 00:13:16.596 fused_ordering(844) 00:13:16.596 fused_ordering(845) 00:13:16.596 
fused_ordering(846) 00:13:16.596 fused_ordering(847) 00:13:16.596 fused_ordering(848) 00:13:16.596 fused_ordering(849) 00:13:16.596 fused_ordering(850) 00:13:16.596 fused_ordering(851) 00:13:16.596 fused_ordering(852) 00:13:16.596 fused_ordering(853) 00:13:16.596 fused_ordering(854) 00:13:16.596 fused_ordering(855) 00:13:16.596 fused_ordering(856) 00:13:16.596 fused_ordering(857) 00:13:16.596 fused_ordering(858) 00:13:16.596 fused_ordering(859) 00:13:16.596 fused_ordering(860) 00:13:16.596 fused_ordering(861) 00:13:16.596 fused_ordering(862) 00:13:16.596 fused_ordering(863) 00:13:16.596 fused_ordering(864) 00:13:16.597 fused_ordering(865) 00:13:16.597 fused_ordering(866) 00:13:16.597 fused_ordering(867) 00:13:16.597 fused_ordering(868) 00:13:16.597 fused_ordering(869) 00:13:16.597 fused_ordering(870) 00:13:16.597 fused_ordering(871) 00:13:16.597 fused_ordering(872) 00:13:16.597 fused_ordering(873) 00:13:16.597 fused_ordering(874) 00:13:16.597 fused_ordering(875) 00:13:16.597 fused_ordering(876) 00:13:16.597 fused_ordering(877) 00:13:16.597 fused_ordering(878) 00:13:16.597 fused_ordering(879) 00:13:16.597 fused_ordering(880) 00:13:16.597 fused_ordering(881) 00:13:16.597 fused_ordering(882) 00:13:16.597 fused_ordering(883) 00:13:16.597 fused_ordering(884) 00:13:16.597 fused_ordering(885) 00:13:16.597 fused_ordering(886) 00:13:16.597 fused_ordering(887) 00:13:16.597 fused_ordering(888) 00:13:16.597 fused_ordering(889) 00:13:16.597 fused_ordering(890) 00:13:16.597 fused_ordering(891) 00:13:16.597 fused_ordering(892) 00:13:16.597 fused_ordering(893) 00:13:16.597 fused_ordering(894) 00:13:16.597 fused_ordering(895) 00:13:16.597 fused_ordering(896) 00:13:16.597 fused_ordering(897) 00:13:16.597 fused_ordering(898) 00:13:16.597 fused_ordering(899) 00:13:16.597 fused_ordering(900) 00:13:16.597 fused_ordering(901) 00:13:16.597 fused_ordering(902) 00:13:16.597 fused_ordering(903) 00:13:16.597 fused_ordering(904) 00:13:16.597 fused_ordering(905) 00:13:16.597 fused_ordering(906) 00:13:16.597 fused_ordering(907) 00:13:16.597 fused_ordering(908) 00:13:16.597 fused_ordering(909) 00:13:16.597 fused_ordering(910) 00:13:16.597 fused_ordering(911) 00:13:16.597 fused_ordering(912) 00:13:16.597 fused_ordering(913) 00:13:16.597 fused_ordering(914) 00:13:16.597 fused_ordering(915) 00:13:16.597 fused_ordering(916) 00:13:16.597 fused_ordering(917) 00:13:16.597 fused_ordering(918) 00:13:16.597 fused_ordering(919) 00:13:16.597 fused_ordering(920) 00:13:16.597 fused_ordering(921) 00:13:16.597 fused_ordering(922) 00:13:16.597 fused_ordering(923) 00:13:16.597 fused_ordering(924) 00:13:16.597 fused_ordering(925) 00:13:16.597 fused_ordering(926) 00:13:16.597 fused_ordering(927) 00:13:16.597 fused_ordering(928) 00:13:16.597 fused_ordering(929) 00:13:16.597 fused_ordering(930) 00:13:16.597 fused_ordering(931) 00:13:16.597 fused_ordering(932) 00:13:16.597 fused_ordering(933) 00:13:16.597 fused_ordering(934) 00:13:16.597 fused_ordering(935) 00:13:16.597 fused_ordering(936) 00:13:16.597 fused_ordering(937) 00:13:16.597 fused_ordering(938) 00:13:16.597 fused_ordering(939) 00:13:16.597 fused_ordering(940) 00:13:16.597 fused_ordering(941) 00:13:16.597 fused_ordering(942) 00:13:16.597 fused_ordering(943) 00:13:16.597 fused_ordering(944) 00:13:16.597 fused_ordering(945) 00:13:16.597 fused_ordering(946) 00:13:16.597 fused_ordering(947) 00:13:16.597 fused_ordering(948) 00:13:16.597 fused_ordering(949) 00:13:16.597 fused_ordering(950) 00:13:16.597 fused_ordering(951) 00:13:16.597 fused_ordering(952) 00:13:16.597 fused_ordering(953) 
00:13:16.597 fused_ordering(954) 00:13:16.597 fused_ordering(955) 00:13:16.597 fused_ordering(956) 00:13:16.597 fused_ordering(957) 00:13:16.597 fused_ordering(958) 00:13:16.597 fused_ordering(959) 00:13:16.597 fused_ordering(960) 00:13:16.597 fused_ordering(961) 00:13:16.597 fused_ordering(962) 00:13:16.597 fused_ordering(963) 00:13:16.597 fused_ordering(964) 00:13:16.597 fused_ordering(965) 00:13:16.597 fused_ordering(966) 00:13:16.597 fused_ordering(967) 00:13:16.597 fused_ordering(968) 00:13:16.597 fused_ordering(969) 00:13:16.597 fused_ordering(970) 00:13:16.597 fused_ordering(971) 00:13:16.597 fused_ordering(972) 00:13:16.597 fused_ordering(973) 00:13:16.597 fused_ordering(974) 00:13:16.597 fused_ordering(975) 00:13:16.597 fused_ordering(976) 00:13:16.597 fused_ordering(977) 00:13:16.597 fused_ordering(978) 00:13:16.597 fused_ordering(979) 00:13:16.597 fused_ordering(980) 00:13:16.597 fused_ordering(981) 00:13:16.597 fused_ordering(982) 00:13:16.597 fused_ordering(983) 00:13:16.597 fused_ordering(984) 00:13:16.597 fused_ordering(985) 00:13:16.597 fused_ordering(986) 00:13:16.597 fused_ordering(987) 00:13:16.597 fused_ordering(988) 00:13:16.597 fused_ordering(989) 00:13:16.597 fused_ordering(990) 00:13:16.597 fused_ordering(991) 00:13:16.597 fused_ordering(992) 00:13:16.597 fused_ordering(993) 00:13:16.597 fused_ordering(994) 00:13:16.597 fused_ordering(995) 00:13:16.597 fused_ordering(996) 00:13:16.597 fused_ordering(997) 00:13:16.597 fused_ordering(998) 00:13:16.597 fused_ordering(999) 00:13:16.597 fused_ordering(1000) 00:13:16.597 fused_ordering(1001) 00:13:16.597 fused_ordering(1002) 00:13:16.597 fused_ordering(1003) 00:13:16.597 fused_ordering(1004) 00:13:16.597 fused_ordering(1005) 00:13:16.597 fused_ordering(1006) 00:13:16.597 fused_ordering(1007) 00:13:16.597 fused_ordering(1008) 00:13:16.597 fused_ordering(1009) 00:13:16.597 fused_ordering(1010) 00:13:16.597 fused_ordering(1011) 00:13:16.597 fused_ordering(1012) 00:13:16.597 fused_ordering(1013) 00:13:16.597 fused_ordering(1014) 00:13:16.597 fused_ordering(1015) 00:13:16.597 fused_ordering(1016) 00:13:16.597 fused_ordering(1017) 00:13:16.597 fused_ordering(1018) 00:13:16.597 fused_ordering(1019) 00:13:16.597 fused_ordering(1020) 00:13:16.597 fused_ordering(1021) 00:13:16.597 fused_ordering(1022) 00:13:16.597 fused_ordering(1023) 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:16.597 rmmod nvme_tcp 00:13:16.597 rmmod nvme_fabrics 00:13:16.597 rmmod nvme_keyring 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- 
# set -e 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2793694 ']' 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2793694 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2793694 ']' 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2793694 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.597 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2793694 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2793694' 00:13:16.856 killing process with pid 2793694 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2793694 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2793694 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.856 13:05:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:19.388 00:13:19.388 real 0m10.672s 00:13:19.388 user 0m4.911s 00:13:19.388 sys 0m5.861s 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 
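Condensed, the nvmftestfini teardown traced above amounts to the following shell steps. A minimal sketch: PID 2793694 and the cvl_* names are this run's values, and the namespace removal line is an assumption about what _remove_spdk_ns does under the hood.

    sync                                                   # flush writes before unloading drivers
    modprobe -v -r nvme-tcp                                # also drops nvme_fabrics and nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill 2793694 && wait 2793694                           # killprocess: stop the nvmf_tgt reactor (wait works because it is a child of this shell)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1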
-- # set +x 00:13:19.388 ************************************ 00:13:19.388 END TEST nvmf_fused_ordering 00:13:19.388 ************************************ 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:19.388 ************************************ 00:13:19.388 START TEST nvmf_ns_masking 00:13:19.388 ************************************ 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:19.388 * Looking for test storage... 00:13:19.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
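The run_test call above is the harness's banner-and-dispatch wrapper. A minimal sketch of the pattern, assuming a simplified body; the real helper in autotest_common.sh also validates its arguments, records timing, and manages xtrace.

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        "$@"                                   # run the test script with its arguments
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }
    run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp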
ver1_l : ver2_l) )) 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:19.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.388 --rc genhtml_branch_coverage=1 00:13:19.388 --rc genhtml_function_coverage=1 00:13:19.388 --rc genhtml_legend=1 00:13:19.388 --rc geninfo_all_blocks=1 00:13:19.388 --rc geninfo_unexecuted_blocks=1 00:13:19.388 00:13:19.388 ' 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:19.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.388 --rc genhtml_branch_coverage=1 00:13:19.388 --rc genhtml_function_coverage=1 00:13:19.388 --rc genhtml_legend=1 00:13:19.388 --rc geninfo_all_blocks=1 00:13:19.388 --rc geninfo_unexecuted_blocks=1 00:13:19.388 00:13:19.388 ' 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:19.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.388 --rc genhtml_branch_coverage=1 00:13:19.388 --rc genhtml_function_coverage=1 00:13:19.388 --rc genhtml_legend=1 00:13:19.388 --rc geninfo_all_blocks=1 00:13:19.388 --rc geninfo_unexecuted_blocks=1 00:13:19.388 00:13:19.388 ' 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:19.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.388 --rc genhtml_branch_coverage=1 00:13:19.388 --rc genhtml_function_coverage=1 00:13:19.388 --rc genhtml_legend=1 00:13:19.388 --rc geninfo_all_blocks=1 00:13:19.388 --rc geninfo_unexecuted_blocks=1 00:13:19.388 00:13:19.388 ' 00:13:19.388 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
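The cmp_versions walk traced above (scripts/common.sh) decides whether the installed lcov predates 2.x: split both version strings on '.', '-' or ':' and compare field by field. A standalone re-implementation, assuming purely numeric fields (the real helper also validates each field against ^[0-9]+$):

    lt() {  # succeeds when version $1 is strictly older than $2
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo 'lcov is older than 2.x, use the compat LCOV_OPTS'   # mirrors the lt 1.15 2 call above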
nvmf/common.sh@7 -- # uname -s 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:19.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
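The three paths/export.sh traces above are plain prepends; every re-source of the file pushes the same tool directories onto the front again, which is why the PATH values keep growing with duplicate segments. The pattern in isolation:

    # paths/export.sh pattern as traced: prepend the pinned toolchains on each source.
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH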
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6b213d40-0739-4ef9-a699-de75340561ac 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c96824be-4a0c-46a2-9ef0-6942fc71dc21 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=820d89dd-b07b-4887-ab4c-27816361c0eb 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:19.389 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:25.972 13:05:28 
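ns_masking.sh begins by minting the identities it will mask against, as traced above. Reproduced standalone; the UUID values naturally differ per run.

    ns1uuid=$(uuidgen)                        # namespace 1 UUID, 6b213d40-... in this run
    ns2uuid=$(uuidgen)                        # namespace 2 UUID
    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN1=nqn.2016-06.io.spdk:host1
    HOSTNQN2=nqn.2016-06.io.spdk:host2
    HOSTID=$(uuidgen)                         # host identifier used by the test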
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.972 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:25.973 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:25.973 13:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:25.973 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:25.973 Found net devices under 0000:86:00.0: cvl_0_0 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:25.973 Found net devices under 0000:86:00.1: cvl_0_1 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:25.973 13:05:28 
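The device walk above is the harness resolving its cached PCI IDs (Intel E810, device 0x159b, in this chassis) to kernel net devices through sysfs. A minimal standalone equivalent, using lspci in place of the harness's pci_bus_cache:

    for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done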
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:25.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:13:25.973 00:13:25.973 --- 10.0.0.2 ping statistics --- 00:13:25.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.973 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:25.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:25.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:13:25.973 00:13:25.973 --- 10.0.0.1 ping statistics --- 00:13:25.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.973 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2797514 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2797514 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:25.973 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2797514 ']' 00:13:25.974 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.974 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.974 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.974 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.974 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:25.974 [2024-11-19 13:05:28.565974] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:13:25.974 [2024-11-19 13:05:28.566033] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.974 [2024-11-19 13:05:28.642037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.974 [2024-11-19 13:05:28.683476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.974 [2024-11-19 13:05:28.683511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.974 [2024-11-19 13:05:28.683519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.974 [2024-11-19 13:05:28.683526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.974 [2024-11-19 13:05:28.683531] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
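By this point nvmf_tcp_init has assembled the point-to-point topology the rest of the suite runs on: cvl_0_0 moves into a fresh network namespace (cvl_0_0_ns_spdk) as the target side at 10.0.0.2/24, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1/24, port 4420 is opened in iptables, and nvmf_tgt is launched inside the namespace through NVMF_TARGET_NS_CMD. Condensed from the trace, assuming the same interface names:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
    # then the target is started inside the namespace (path shortened):
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF

The two pings verify connectivity in both directions before the target is started.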
00:13:25.974 [2024-11-19 13:05:28.684122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.974 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.974 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:25.974 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:25.974 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:25.974 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:25.974 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.974 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:25.974 [2024-11-19 13:05:28.997014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.974 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:25.974 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:25.974 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:25.974 Malloc1 00:13:25.974 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:26.234 Malloc2 00:13:26.234 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:26.491 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:26.491 13:05:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.750 [2024-11-19 13:05:29.998985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.750 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:26.750 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 820d89dd-b07b-4887-ab4c-27816361c0eb -a 10.0.0.2 -s 4420 -i 4 00:13:27.008 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:27.008 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:27.008 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.008 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:27.008 
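The connect helper above gives the initiator an explicit identity: -q sets the host NQN (nqn.2016-06.io.spdk:host1), which is what the per-host masking RPCs key on later, -I pins the host UUID, and -i 4 requests four I/O queues; waitforserial then polls until a block device carrying the subsystem's serial shows up. Replayed as plain commands with the values from this run:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 \
        -I 820d89dd-b07b-4887-ab4c-27816361c0eb -i 4
    # waitforserial retries this up to 16 times with a 2 s sleep in between
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME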
13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:28.909 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:28.909 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:28.909 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.909 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:28.909 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.909 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:28.909 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:28.909 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:28.909 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:28.909 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:28.909 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:28.909 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:28.909 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:28.909 [ 0]:0x1 00:13:28.909 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:28.909 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:29.168 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3225fb97d659484c803669f99253b623 00:13:29.168 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3225fb97d659484c803669f99253b623 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:29.168 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:29.168 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:29.168 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:29.168 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:29.168 [ 0]:0x1 00:13:29.168 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:29.168 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:29.426 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3225fb97d659484c803669f99253b623 00:13:29.426 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3225fb97d659484c803669f99253b623 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:29.426 13:05:32 
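ns_is_visible is the assertion at the heart of the test: it greps nvme list-ns for the namespace ID and then reads the NGUID with nvme id-ns. A visible namespace reports its real NGUID; a masked one drops out of the list and the nguid query yields all zeroes, which the != all-zeroes comparison above distinguishes. The helper boils down to:

    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid
    # visible -> 3225fb97d659484c803669f99253b623 (Malloc1's NGUID in this run)
    # masked  -> 00000000000000000000000000000000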
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:29.426 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:29.426 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:29.426 [ 1]:0x2 00:13:29.426 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:29.426 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:29.426 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=481e4b8a986f431186e7cb750813f0a1 00:13:29.426 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 481e4b8a986f431186e7cb750813f0a1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:29.426 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:29.426 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:29.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.426 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.684 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:29.942 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:29.942 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 820d89dd-b07b-4887-ab4c-27816361c0eb -a 10.0.0.2 -s 4420 -i 4 00:13:30.200 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:30.200 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:30.200 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:30.200 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:30.200 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:30.200 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:32.101 [ 0]:0x2 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:32.101 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:32.359 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=481e4b8a986f431186e7cb750813f0a1 00:13:32.359 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 481e4b8a986f431186e7cb750813f0a1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:32.360 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:32.360 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:32.360 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:32.360 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:32.360 [ 0]:0x1 00:13:32.618 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:32.618 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:32.618 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3225fb97d659484c803669f99253b623 00:13:32.618 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3225fb97d659484c803669f99253b623 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:32.618 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:32.618 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:32.618 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:32.618 [ 1]:0x2 00:13:32.618 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:32.618 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:32.618 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=481e4b8a986f431186e7cb750813f0a1 00:13:32.618 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 481e4b8a986f431186e7cb750813f0a1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:32.618 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:32.877 13:05:36 
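This is the core masking sequence. Namespace 1 was re-added with --no-auto-visible, so it stays hidden until a host NQN is explicitly attached, and detaching the host hides it again. The NOT wrapper whose xtrace unfolds here runs a command expecting failure (es=1), letting the suite assert invisibility without tripping its error handling. The RPC side, with paths shortened:

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host      nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # ns 1 visible to host1
    rpc.py nvmf_ns_remove_host   nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hidden again
    NOT ns_is_visible 0x1    # passes only because the visibility check now fails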
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:32.877 [ 0]:0x2 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=481e4b8a986f431186e7cb750813f0a1 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 481e4b8a986f431186e7cb750813f0a1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.877 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:33.135 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:33.136 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 820d89dd-b07b-4887-ab4c-27816361c0eb -a 10.0.0.2 -s 4420 -i 4 00:13:33.136 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:33.136 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:33.136 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.136 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:33.136 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:33.136 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:35.668 [ 0]:0x1 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3225fb97d659484c803669f99253b623 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3225fb97d659484c803669f99253b623 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:35.668 [ 1]:0x2 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=481e4b8a986f431186e7cb750813f0a1 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 481e4b8a986f431186e7cb750813f0a1 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:35.668 [ 0]:0x2 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=481e4b8a986f431186e7cb750813f0a1 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 481e4b8a986f431186e7cb750813f0a1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.668 13:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:35.668 13:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:35.927 [2024-11-19 13:05:39.157713] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:35.927 request: 00:13:35.927 { 00:13:35.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:35.927 "nsid": 2, 00:13:35.927 "host": "nqn.2016-06.io.spdk:host1", 00:13:35.927 "method": "nvmf_ns_remove_host", 00:13:35.927 "req_id": 1 00:13:35.927 } 00:13:35.927 Got JSON-RPC error response 00:13:35.927 response: 00:13:35.927 { 00:13:35.927 "code": -32602, 00:13:35.927 "message": "Invalid parameters" 00:13:35.927 } 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:35.927 13:05:39 
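The -32602 response above is the expected outcome: namespace 2 (Malloc2) was added without --no-auto-visible, and per-host visibility changes apply only to masked namespaces, so detaching a host from an auto-visible namespace is rejected; the NOT wrapper turns that error into a pass. The offending call, shortened:

    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
    # -> "Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2"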
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:35.927 [ 0]:0x2 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=481e4b8a986f431186e7cb750813f0a1 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 481e4b8a986f431186e7cb750813f0a1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:35.927 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.185 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2799476 00:13:36.186 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:36.186 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.186 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2799476 /var/tmp/host.sock 00:13:36.186 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2799476 ']' 00:13:36.186 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:36.186 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:36.186 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:36.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:36.186 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:36.186 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:36.186 [2024-11-19 13:05:39.390307] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:13:36.186 [2024-11-19 13:05:39.390355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2799476 ] 00:13:36.186 [2024-11-19 13:05:39.466007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.186 [2024-11-19 13:05:39.508513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.443 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.443 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:36.443 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.701 13:05:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:36.960 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6b213d40-0739-4ef9-a699-de75340561ac 00:13:36.960 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:36.960 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6B213D4007394EF9A699DE75340561AC -i 00:13:37.220 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c96824be-4a0c-46a2-9ef0-6942fc71dc21 00:13:37.220 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:37.220 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C96824BE4A0C46A29EF06942FC71DC21 -i 00:13:37.220 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:37.478 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:37.736 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:37.736 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:37.995 nvme0n1 00:13:37.995 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:37.995 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:38.562 nvme1n2 00:13:38.562 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:38.562 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:38.562 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:38.562 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:38.562 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:38.562 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:38.563 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:38.563 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:38.563 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:38.821 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6b213d40-0739-4ef9-a699-de75340561ac == \6\b\2\1\3\d\4\0\-\0\7\3\9\-\4\e\f\9\-\a\6\9\9\-\d\e\7\5\3\4\0\5\6\1\a\c ]] 00:13:38.822 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:38.822 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:38.822 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:39.080 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
c96824be-4a0c-46a2-9ef0-6942fc71dc21 == \c\9\6\8\2\4\b\e\-\4\a\0\c\-\4\6\a\2\-\9\e\f\0\-\6\9\4\2\f\c\7\1\d\c\2\1 ]] 00:13:39.080 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.338 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:39.338 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 6b213d40-0739-4ef9-a699-de75340561ac 00:13:39.338 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:39.338 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6B213D4007394EF9A699DE75340561AC 00:13:39.338 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:39.338 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6B213D4007394EF9A699DE75340561AC 00:13:39.338 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:39.338 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.338 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:39.338 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.338 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:39.338 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.338 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:39.338 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:39.338 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6B213D4007394EF9A699DE75340561AC 00:13:39.597 [2024-11-19 13:05:42.851944] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:39.597 [2024-11-19 13:05:42.851988] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:39.597 [2024-11-19 13:05:42.851996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.597 request: 00:13:39.597 { 00:13:39.597 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:39.597 "namespace": { 00:13:39.597 "bdev_name": 
"invalid", 00:13:39.597 "nsid": 1, 00:13:39.597 "nguid": "6B213D4007394EF9A699DE75340561AC", 00:13:39.597 "no_auto_visible": false 00:13:39.597 }, 00:13:39.597 "method": "nvmf_subsystem_add_ns", 00:13:39.597 "req_id": 1 00:13:39.597 } 00:13:39.597 Got JSON-RPC error response 00:13:39.597 response: 00:13:39.597 { 00:13:39.597 "code": -32602, 00:13:39.597 "message": "Invalid parameters" 00:13:39.597 } 00:13:39.597 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:39.597 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:39.597 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:39.597 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:39.597 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 6b213d40-0739-4ef9-a699-de75340561ac 00:13:39.597 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:39.597 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6B213D4007394EF9A699DE75340561AC -i 00:13:39.856 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:41.758 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:41.758 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:41.758 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:42.016 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:42.016 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2799476 00:13:42.016 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2799476 ']' 00:13:42.016 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2799476 00:13:42.016 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:42.016 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.016 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2799476 00:13:42.016 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:42.016 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:42.016 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2799476' 00:13:42.016 killing process with pid 2799476 00:13:42.016 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2799476 00:13:42.016 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2799476 00:13:42.273 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:42.530 rmmod nvme_tcp 00:13:42.530 rmmod nvme_fabrics 00:13:42.530 rmmod nvme_keyring 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2797514 ']' 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2797514 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2797514 ']' 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2797514 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.530 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2797514 00:13:42.788 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:42.788 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:42.788 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2797514' 00:13:42.788 killing process with pid 2797514 00:13:42.788 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2797514 00:13:42.788 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2797514 00:13:42.788 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:42.788 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:42.788 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:42.788 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:42.788 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:42.788 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
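Teardown removes only the firewall state the test created: every rule went in through the ipts wrapper with an SPDK_NVMF comment, so cleanup can round-trip the whole ruleset through a filter instead of tracking rule numbers, as the trace around this point shows:

    iptables-save | grep -v SPDK_NVMF | iptables-restore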
00:13:42.788 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:42.788 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:42.788 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:42.788 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.788 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.788 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:45.319 00:13:45.319 real 0m25.886s 00:13:45.319 user 0m31.095s 00:13:45.319 sys 0m7.074s 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:45.319 ************************************ 00:13:45.319 END TEST nvmf_ns_masking 00:13:45.319 ************************************ 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:45.319 ************************************ 00:13:45.319 START TEST nvmf_nvme_cli 00:13:45.319 ************************************ 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:45.319 * Looking for test storage... 
00:13:45.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:45.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.319 --rc genhtml_branch_coverage=1 00:13:45.319 --rc genhtml_function_coverage=1 00:13:45.319 --rc genhtml_legend=1 00:13:45.319 --rc geninfo_all_blocks=1 00:13:45.319 --rc geninfo_unexecuted_blocks=1 00:13:45.319 00:13:45.319 ' 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:45.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.319 --rc genhtml_branch_coverage=1 00:13:45.319 --rc genhtml_function_coverage=1 00:13:45.319 --rc genhtml_legend=1 00:13:45.319 --rc geninfo_all_blocks=1 00:13:45.319 --rc geninfo_unexecuted_blocks=1 00:13:45.319 00:13:45.319 ' 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:45.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.319 --rc genhtml_branch_coverage=1 00:13:45.319 --rc genhtml_function_coverage=1 00:13:45.319 --rc genhtml_legend=1 00:13:45.319 --rc geninfo_all_blocks=1 00:13:45.319 --rc geninfo_unexecuted_blocks=1 00:13:45.319 00:13:45.319 ' 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:45.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.319 --rc genhtml_branch_coverage=1 00:13:45.319 --rc genhtml_function_coverage=1 00:13:45.319 --rc genhtml_legend=1 00:13:45.319 --rc geninfo_all_blocks=1 00:13:45.319 --rc geninfo_unexecuted_blocks=1 00:13:45.319 00:13:45.319 ' 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
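The lt/cmp_versions trace just above is the harness deciding whether the installed lcov predates 2.x and therefore needs the legacy --rc lcov_*_coverage spellings. A condensed sketch of that comparison, assuming purely numeric version fields (the real cmp_versions in scripts/common.sh also handles the '>', '=', '>=', and '<=' operators):

    lt() {  # return 0 when version $1 < version $2
      local IFS=.-:  # split fields on . - : exactly as the trace does
      local -a v1 v2
      read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
      local i
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1  # equal is not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 &&
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'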
00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.319 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:45.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:45.320 13:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:45.320 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:52.044 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:52.044 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.044 
13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:52.044 Found net devices under 0000:86:00.0: cvl_0_0 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:52.044 Found net devices under 0000:86:00.1: cvl_0_1 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.044 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:52.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:13:52.045 00:13:52.045 --- 10.0.0.2 ping statistics --- 00:13:52.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.045 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:52.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:13:52.045 00:13:52.045 --- 10.0.0.1 ping statistics --- 00:13:52.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.045 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2804188 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2804188 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2804188 ']' 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:52.045 [2024-11-19 13:05:54.481817] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:13:52.045 [2024-11-19 13:05:54.481861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.045 [2024-11-19 13:05:54.558879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:52.045 [2024-11-19 13:05:54.600716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.045 [2024-11-19 13:05:54.600756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.045 [2024-11-19 13:05:54.600763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.045 [2024-11-19 13:05:54.600769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.045 [2024-11-19 13:05:54.600774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.045 [2024-11-19 13:05:54.602392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.045 [2024-11-19 13:05:54.602499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.045 [2024-11-19 13:05:54.602609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.045 [2024-11-19 13:05:54.602610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:52.045 [2024-11-19 13:05:54.751967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:52.045 Malloc0 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:52.045 Malloc1 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:52.045 [2024-11-19 13:05:54.855293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.045 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:52.045 00:13:52.045 Discovery Log Number of Records 2, Generation counter 2 00:13:52.045 =====Discovery Log Entry 0====== 00:13:52.045 trtype: tcp 00:13:52.045 adrfam: ipv4 00:13:52.045 subtype: current discovery subsystem 00:13:52.045 treq: not required 00:13:52.045 portid: 0 00:13:52.045 trsvcid: 4420 00:13:52.045 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:13:52.045 traddr: 10.0.0.2 00:13:52.045 eflags: explicit discovery connections, duplicate discovery information 00:13:52.045 sectype: none 00:13:52.045 =====Discovery Log Entry 1====== 00:13:52.045 trtype: tcp 00:13:52.045 adrfam: ipv4 00:13:52.045 subtype: nvme subsystem 00:13:52.045 treq: not required 00:13:52.045 portid: 0 00:13:52.045 trsvcid: 4420 00:13:52.045 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:52.045 traddr: 10.0.0.2 00:13:52.045 eflags: none 00:13:52.045 sectype: none 00:13:52.045 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:52.045 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:52.045 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:52.045 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:52.045 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:52.045 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:52.045 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:52.046 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:52.046 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:52.046 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:52.046 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:52.993 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:52.993 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:52.993 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.993 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:52.993 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:52.993 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:54.894 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:54.894 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:54.894 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:55.152 13:05:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:55.152 /dev/nvme0n2 ]] 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.152 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:55.410 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:55.410 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.410 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:55.410 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.410 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:55.410 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:55.410 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.410 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:55.410 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:55.410 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:55.410 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:55.410 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.668 13:05:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:55.668 rmmod nvme_tcp 00:13:55.668 rmmod nvme_fabrics 00:13:55.668 rmmod nvme_keyring 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2804188 ']' 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2804188 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2804188 ']' 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2804188 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.668 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2804188 00:13:55.668 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:55.668 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:55.668 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2804188' 00:13:55.668 killing process with pid 2804188 00:13:55.668 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2804188 00:13:55.668 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2804188 00:13:55.927 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:55.927 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:55.927 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:55.927 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:55.927 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:55.927 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:55.927 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:55.927 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:55.927 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:55.927 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.927 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.927 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.457 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:58.457 00:13:58.457 real 0m13.019s 00:13:58.457 user 0m20.206s 00:13:58.457 sys 0m5.053s 00:13:58.457 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.457 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.457 ************************************ 00:13:58.457 END TEST nvmf_nvme_cli 00:13:58.457 ************************************ 00:13:58.457 13:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:58.457 13:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:58.457 13:06:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:58.457 13:06:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.457 13:06:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:58.457 ************************************ 00:13:58.457 START TEST nvmf_vfio_user 00:13:58.457 ************************************ 00:13:58.457 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:13:58.457 * Looking for test storage... 00:13:58.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.457 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:58.457 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:13:58.457 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:58.457 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:58.457 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:58.457 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:58.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.458 --rc genhtml_branch_coverage=1 00:13:58.458 --rc genhtml_function_coverage=1 00:13:58.458 --rc genhtml_legend=1 00:13:58.458 --rc geninfo_all_blocks=1 00:13:58.458 --rc geninfo_unexecuted_blocks=1 00:13:58.458 00:13:58.458 ' 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:58.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.458 --rc genhtml_branch_coverage=1 00:13:58.458 --rc genhtml_function_coverage=1 00:13:58.458 --rc genhtml_legend=1 00:13:58.458 --rc geninfo_all_blocks=1 00:13:58.458 --rc geninfo_unexecuted_blocks=1 00:13:58.458 00:13:58.458 ' 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:58.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.458 --rc genhtml_branch_coverage=1 00:13:58.458 --rc genhtml_function_coverage=1 00:13:58.458 --rc genhtml_legend=1 00:13:58.458 --rc geninfo_all_blocks=1 00:13:58.458 --rc geninfo_unexecuted_blocks=1 00:13:58.458 00:13:58.458 ' 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:58.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.458 --rc genhtml_branch_coverage=1 00:13:58.458 --rc genhtml_function_coverage=1 00:13:58.458 --rc genhtml_legend=1 00:13:58.458 --rc geninfo_all_blocks=1 00:13:58.458 --rc geninfo_unexecuted_blocks=1 00:13:58.458 00:13:58.458 ' 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:58.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
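Annotation: the xtrace above is scripts/common.sh deciding whether the installed lcov predates 2.0. 'lt 1.15 2' splits both versions on IFS=.-:, treats missing components as zero, and compares component by component until the first difference. A minimal sketch of that comparison, assuming simplified handling (the real helper also validates each component with its 'decimal' function before comparing):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:                  # split versions on dots, dashes and colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            # a missing component compares as 0; the first difference decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]              # every component matched
    }

Because 1 < 2 at the first component, 'lt 1.15 2' succeeds, so the pre-2.0 option spelling (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is what gets exported in LCOV_OPTS and LCOV on the lines above.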
00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:58.458 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:58.459 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:58.459 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:58.459 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:58.459 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:58.459 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2805592 00:13:58.459 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2805592' 00:13:58.459 Process pid: 2805592 00:13:58.459 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:58.459 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2805592 00:13:58.459 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:58.459 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2805592 ']' 00:13:58.459 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.459 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.459 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.459 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.459 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:58.459 [2024-11-19 13:06:01.652053] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:13:58.459 [2024-11-19 13:06:01.652105] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.459 [2024-11-19 13:06:01.729107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:58.459 [2024-11-19 13:06:01.773176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.459 [2024-11-19 13:06:01.773215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:58.459 [2024-11-19 13:06:01.773222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.459 [2024-11-19 13:06:01.773229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.459 [2024-11-19 13:06:01.773233] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.459 [2024-11-19 13:06:01.774678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.459 [2024-11-19 13:06:01.774787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.459 [2024-11-19 13:06:01.774894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.459 [2024-11-19 13:06:01.774895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:58.717 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.717 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:58.717 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:59.649 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:59.907 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:59.907 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:59.907 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:59.907 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:59.907 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:00.165 Malloc1 00:14:00.165 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:00.165 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:00.423 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:00.680 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:00.680 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:00.680 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:00.938 Malloc2 00:14:00.938 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
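Annotation: target-side setup is now complete for the first device and repeating for the second (the add_ns/add_listener steps for cnode2 continue just below): nvmf_tgt was started on cores 0-3, a single VFIOUSER transport was created, and each device gets a malloc bdev, a subsystem, a namespace and a vfio-user listener. A condensed sketch of that per-device sequence, using the same RPCs the trace shows, with the absolute rpc.py path shortened for readability:

    rpc=scripts/rpc.py                       # the trace uses the absolute workspace path
    $rpc nvmf_create_transport -t VFIOUSER   # one transport shared by both devices
    for i in 1 2; do                         # NUM_DEVICES=2
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i        # 64 MiB bdev, 512-byte blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

Note that each listener address is a directory rather than an IP:port pair: the vfio-user transport presents the controller as an emulated PCIe NVMe device over a socket under that path (the 'cntrl' file seen later in the trace), which is why the initiator tools below connect with trtype:VFIOUSER and a traddr pointing at the same directory.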
00:14:01.196 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:01.196 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:01.454 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:01.454 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:01.454 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:01.454 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:01.454 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:01.454 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:01.454 [2024-11-19 13:06:04.774777] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:14:01.454 [2024-11-19 13:06:04.774814] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2806091 ] 00:14:01.454 [2024-11-19 13:06:04.812942] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:01.454 [2024-11-19 13:06:04.825240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:01.454 [2024-11-19 13:06:04.825263] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f895e5fb000 00:14:01.454 [2024-11-19 13:06:04.826239] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:01.454 [2024-11-19 13:06:04.827239] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:01.454 [2024-11-19 13:06:04.828246] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:01.454 [2024-11-19 13:06:04.829249] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:01.713 [2024-11-19 13:06:04.830265] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:01.713 [2024-11-19 13:06:04.831257] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:01.713 [2024-11-19 13:06:04.832270] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:01.713 [2024-11-19 13:06:04.833277] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:01.713 [2024-11-19 13:06:04.834286] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:01.713 [2024-11-19 13:06:04.834296] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f895e5f0000 00:14:01.713 [2024-11-19 13:06:04.835238] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:01.713 [2024-11-19 13:06:04.847851] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:01.713 [2024-11-19 13:06:04.847875] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:01.713 [2024-11-19 13:06:04.850380] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:01.713 [2024-11-19 13:06:04.850418] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:01.713 [2024-11-19 13:06:04.850487] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:01.713 [2024-11-19 13:06:04.850501] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:01.713 [2024-11-19 13:06:04.850507] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:01.713 [2024-11-19 13:06:04.851374] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:01.713 [2024-11-19 13:06:04.851383] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:01.713 [2024-11-19 13:06:04.851390] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:01.713 [2024-11-19 13:06:04.852386] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:01.713 [2024-11-19 13:06:04.852395] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:01.713 [2024-11-19 13:06:04.852402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:01.713 [2024-11-19 13:06:04.853388] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:01.713 [2024-11-19 13:06:04.853397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:01.713 [2024-11-19 13:06:04.854400] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:14:01.713 [2024-11-19 13:06:04.854409] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:01.713 [2024-11-19 13:06:04.854414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:01.714 [2024-11-19 13:06:04.854420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:01.714 [2024-11-19 13:06:04.854528] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:01.714 [2024-11-19 13:06:04.854533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:01.714 [2024-11-19 13:06:04.854538] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:01.714 [2024-11-19 13:06:04.855404] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:01.714 [2024-11-19 13:06:04.856406] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:01.714 [2024-11-19 13:06:04.857412] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:01.714 [2024-11-19 13:06:04.858413] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:01.714 [2024-11-19 13:06:04.858488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:01.714 [2024-11-19 13:06:04.859421] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:01.714 [2024-11-19 13:06:04.859428] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:01.714 [2024-11-19 13:06:04.859432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859449] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:01.714 [2024-11-19 13:06:04.859456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859470] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:01.714 [2024-11-19 13:06:04.859475] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:01.714 [2024-11-19 13:06:04.859480] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:01.714 [2024-11-19 13:06:04.859493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:01.714 [2024-11-19 13:06:04.859534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:01.714 [2024-11-19 13:06:04.859544] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:01.714 [2024-11-19 13:06:04.859548] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:01.714 [2024-11-19 13:06:04.859552] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:01.714 [2024-11-19 13:06:04.859556] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:01.714 [2024-11-19 13:06:04.859562] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:01.714 [2024-11-19 13:06:04.859566] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:01.714 [2024-11-19 13:06:04.859571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:01.714 [2024-11-19 13:06:04.859601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:01.714 [2024-11-19 13:06:04.859611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.714 [2024-11-19 13:06:04.859618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.714 [2024-11-19 13:06:04.859626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.714 [2024-11-19 13:06:04.859633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.714 [2024-11-19 13:06:04.859637] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859651] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:01.714 [2024-11-19 13:06:04.859660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:01.714 [2024-11-19 13:06:04.859667] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:01.714 
[2024-11-19 13:06:04.859672] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859683] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859692] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:01.714 [2024-11-19 13:06:04.859700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:01.714 [2024-11-19 13:06:04.859751] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859766] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:01.714 [2024-11-19 13:06:04.859770] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:01.714 [2024-11-19 13:06:04.859773] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:01.714 [2024-11-19 13:06:04.859778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:01.714 [2024-11-19 13:06:04.859788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:01.714 [2024-11-19 13:06:04.859796] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:01.714 [2024-11-19 13:06:04.859807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859820] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:01.714 [2024-11-19 13:06:04.859824] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:01.714 [2024-11-19 13:06:04.859827] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:01.714 [2024-11-19 13:06:04.859832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:01.714 [2024-11-19 13:06:04.859851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:01.714 [2024-11-19 13:06:04.859862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859875] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:01.714 [2024-11-19 13:06:04.859879] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:01.714 [2024-11-19 13:06:04.859882] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:01.714 [2024-11-19 13:06:04.859887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:01.714 [2024-11-19 13:06:04.859900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:01.714 [2024-11-19 13:06:04.859907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859935] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859940] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:01.714 [2024-11-19 13:06:04.859944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:01.714 [2024-11-19 13:06:04.859953] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:01.714 [2024-11-19 13:06:04.859969] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:01.714 [2024-11-19 13:06:04.859978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:01.714 [2024-11-19 13:06:04.859988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:01.714 [2024-11-19 13:06:04.859998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:01.714 [2024-11-19 13:06:04.860008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:01.715 [2024-11-19 13:06:04.860016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:01.715 [2024-11-19 13:06:04.860026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:01.715 [2024-11-19 13:06:04.860036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:01.715 [2024-11-19 13:06:04.860047] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:01.715 [2024-11-19 13:06:04.860052] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:01.715 [2024-11-19 13:06:04.860055] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:01.715 [2024-11-19 13:06:04.860058] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:01.715 [2024-11-19 13:06:04.860061] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:01.715 [2024-11-19 13:06:04.860067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:01.715 [2024-11-19 13:06:04.860073] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:01.715 [2024-11-19 13:06:04.860077] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:01.715 [2024-11-19 13:06:04.860080] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:01.715 [2024-11-19 13:06:04.860085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:01.715 [2024-11-19 13:06:04.860092] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:01.715 [2024-11-19 13:06:04.860095] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:01.715 [2024-11-19 13:06:04.860098] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:01.715 [2024-11-19 13:06:04.860106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:01.715 [2024-11-19 13:06:04.860113] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:01.715 [2024-11-19 13:06:04.860117] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:01.715 [2024-11-19 13:06:04.860120] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:01.715 [2024-11-19 13:06:04.860125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:01.715 [2024-11-19 13:06:04.860131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:01.715 [2024-11-19 13:06:04.860143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:01.715 [2024-11-19 13:06:04.860152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:01.715 [2024-11-19 13:06:04.860159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:01.715 ===================================================== 00:14:01.715 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:01.715 ===================================================== 00:14:01.715 Controller Capabilities/Features 00:14:01.715 ================================ 00:14:01.715 Vendor ID: 4e58 00:14:01.715 Subsystem Vendor ID: 4e58 00:14:01.715 Serial Number: SPDK1 00:14:01.715 Model Number: SPDK bdev Controller 00:14:01.715 Firmware Version: 25.01 00:14:01.715 Recommended Arb Burst: 6 00:14:01.715 IEEE OUI Identifier: 8d 6b 50 00:14:01.715 Multi-path I/O 00:14:01.715 May have multiple subsystem ports: Yes 00:14:01.715 May have multiple controllers: Yes 00:14:01.715 Associated with SR-IOV VF: No 00:14:01.715 Max Data Transfer Size: 131072 00:14:01.715 Max Number of Namespaces: 32 00:14:01.715 Max Number of I/O Queues: 127 00:14:01.715 NVMe Specification Version (VS): 1.3 00:14:01.715 NVMe Specification Version (Identify): 1.3 00:14:01.715 Maximum Queue Entries: 256 00:14:01.715 Contiguous Queues Required: Yes 00:14:01.715 Arbitration Mechanisms Supported 00:14:01.715 Weighted Round Robin: Not Supported 00:14:01.715 Vendor Specific: Not Supported 00:14:01.715 Reset Timeout: 15000 ms 00:14:01.715 Doorbell Stride: 4 bytes 00:14:01.715 NVM Subsystem Reset: Not Supported 00:14:01.715 Command Sets Supported 00:14:01.715 NVM Command Set: Supported 00:14:01.715 Boot Partition: Not Supported 00:14:01.715 Memory Page Size Minimum: 4096 bytes 00:14:01.715 Memory Page Size Maximum: 4096 bytes 00:14:01.715 Persistent Memory Region: Not Supported 00:14:01.715 Optional Asynchronous Events Supported 00:14:01.715 Namespace Attribute Notices: Supported 00:14:01.715 Firmware Activation Notices: Not Supported 00:14:01.715 ANA Change Notices: Not Supported 00:14:01.715 PLE Aggregate Log Change Notices: Not Supported 00:14:01.715 LBA Status Info Alert Notices: Not Supported 00:14:01.715 EGE Aggregate Log Change Notices: Not Supported 00:14:01.715 Normal NVM Subsystem Shutdown event: Not Supported 00:14:01.715 Zone Descriptor Change Notices: Not Supported 00:14:01.715 Discovery Log Change Notices: Not Supported 00:14:01.715 Controller Attributes 00:14:01.715 128-bit Host Identifier: Supported 00:14:01.715 Non-Operational Permissive Mode: Not Supported 00:14:01.715 NVM Sets: Not Supported 00:14:01.715 Read Recovery Levels: Not Supported 00:14:01.715 Endurance Groups: Not Supported 00:14:01.715 Predictable Latency Mode: Not Supported 00:14:01.715 Traffic Based Keep ALive: Not Supported 00:14:01.715 Namespace Granularity: Not Supported 00:14:01.715 SQ Associations: Not Supported 00:14:01.715 UUID List: Not Supported 00:14:01.715 Multi-Domain Subsystem: Not Supported 00:14:01.715 Fixed Capacity Management: Not Supported 00:14:01.715 Variable Capacity Management: Not Supported 00:14:01.715 Delete Endurance Group: Not Supported 00:14:01.715 Delete NVM Set: Not Supported 00:14:01.715 Extended LBA Formats Supported: Not Supported 00:14:01.715 Flexible Data Placement Supported: Not Supported 00:14:01.715 00:14:01.715 Controller Memory Buffer Support 00:14:01.715 ================================ 00:14:01.715 
Supported: No 00:14:01.715 00:14:01.715 Persistent Memory Region Support 00:14:01.715 ================================ 00:14:01.715 Supported: No 00:14:01.715 00:14:01.715 Admin Command Set Attributes 00:14:01.715 ============================ 00:14:01.715 Security Send/Receive: Not Supported 00:14:01.715 Format NVM: Not Supported 00:14:01.715 Firmware Activate/Download: Not Supported 00:14:01.715 Namespace Management: Not Supported 00:14:01.715 Device Self-Test: Not Supported 00:14:01.715 Directives: Not Supported 00:14:01.715 NVMe-MI: Not Supported 00:14:01.715 Virtualization Management: Not Supported 00:14:01.715 Doorbell Buffer Config: Not Supported 00:14:01.715 Get LBA Status Capability: Not Supported 00:14:01.715 Command & Feature Lockdown Capability: Not Supported 00:14:01.715 Abort Command Limit: 4 00:14:01.715 Async Event Request Limit: 4 00:14:01.715 Number of Firmware Slots: N/A 00:14:01.715 Firmware Slot 1 Read-Only: N/A 00:14:01.715 Firmware Activation Without Reset: N/A 00:14:01.715 Multiple Update Detection Support: N/A 00:14:01.715 Firmware Update Granularity: No Information Provided 00:14:01.715 Per-Namespace SMART Log: No 00:14:01.715 Asymmetric Namespace Access Log Page: Not Supported 00:14:01.715 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:01.715 Command Effects Log Page: Supported 00:14:01.715 Get Log Page Extended Data: Supported 00:14:01.715 Telemetry Log Pages: Not Supported 00:14:01.715 Persistent Event Log Pages: Not Supported 00:14:01.715 Supported Log Pages Log Page: May Support 00:14:01.715 Commands Supported & Effects Log Page: Not Supported 00:14:01.715 Feature Identifiers & Effects Log Page:May Support 00:14:01.715 NVMe-MI Commands & Effects Log Page: May Support 00:14:01.715 Data Area 4 for Telemetry Log: Not Supported 00:14:01.715 Error Log Page Entries Supported: 128 00:14:01.715 Keep Alive: Supported 00:14:01.715 Keep Alive Granularity: 10000 ms 00:14:01.715 00:14:01.715 NVM Command Set Attributes 00:14:01.715 ========================== 00:14:01.715 Submission Queue Entry Size 00:14:01.715 Max: 64 00:14:01.715 Min: 64 00:14:01.715 Completion Queue Entry Size 00:14:01.715 Max: 16 00:14:01.715 Min: 16 00:14:01.715 Number of Namespaces: 32 00:14:01.715 Compare Command: Supported 00:14:01.715 Write Uncorrectable Command: Not Supported 00:14:01.715 Dataset Management Command: Supported 00:14:01.715 Write Zeroes Command: Supported 00:14:01.715 Set Features Save Field: Not Supported 00:14:01.715 Reservations: Not Supported 00:14:01.715 Timestamp: Not Supported 00:14:01.715 Copy: Supported 00:14:01.715 Volatile Write Cache: Present 00:14:01.715 Atomic Write Unit (Normal): 1 00:14:01.715 Atomic Write Unit (PFail): 1 00:14:01.715 Atomic Compare & Write Unit: 1 00:14:01.715 Fused Compare & Write: Supported 00:14:01.715 Scatter-Gather List 00:14:01.715 SGL Command Set: Supported (Dword aligned) 00:14:01.715 SGL Keyed: Not Supported 00:14:01.715 SGL Bit Bucket Descriptor: Not Supported 00:14:01.715 SGL Metadata Pointer: Not Supported 00:14:01.716 Oversized SGL: Not Supported 00:14:01.716 SGL Metadata Address: Not Supported 00:14:01.716 SGL Offset: Not Supported 00:14:01.716 Transport SGL Data Block: Not Supported 00:14:01.716 Replay Protected Memory Block: Not Supported 00:14:01.716 00:14:01.716 Firmware Slot Information 00:14:01.716 ========================= 00:14:01.716 Active slot: 1 00:14:01.716 Slot 1 Firmware Revision: 25.01 00:14:01.716 00:14:01.716 00:14:01.716 Commands Supported and Effects 00:14:01.716 ============================== 00:14:01.716 Admin 
Commands 00:14:01.716 -------------- 00:14:01.716 Get Log Page (02h): Supported 00:14:01.716 Identify (06h): Supported 00:14:01.716 Abort (08h): Supported 00:14:01.716 Set Features (09h): Supported 00:14:01.716 Get Features (0Ah): Supported 00:14:01.716 Asynchronous Event Request (0Ch): Supported 00:14:01.716 Keep Alive (18h): Supported 00:14:01.716 I/O Commands 00:14:01.716 ------------ 00:14:01.716 Flush (00h): Supported LBA-Change 00:14:01.716 Write (01h): Supported LBA-Change 00:14:01.716 Read (02h): Supported 00:14:01.716 Compare (05h): Supported 00:14:01.716 Write Zeroes (08h): Supported LBA-Change 00:14:01.716 Dataset Management (09h): Supported LBA-Change 00:14:01.716 Copy (19h): Supported LBA-Change 00:14:01.716 00:14:01.716 Error Log 00:14:01.716 ========= 00:14:01.716 00:14:01.716 Arbitration 00:14:01.716 =========== 00:14:01.716 Arbitration Burst: 1 00:14:01.716 00:14:01.716 Power Management 00:14:01.716 ================ 00:14:01.716 Number of Power States: 1 00:14:01.716 Current Power State: Power State #0 00:14:01.716 Power State #0: 00:14:01.716 Max Power: 0.00 W 00:14:01.716 Non-Operational State: Operational 00:14:01.716 Entry Latency: Not Reported 00:14:01.716 Exit Latency: Not Reported 00:14:01.716 Relative Read Throughput: 0 00:14:01.716 Relative Read Latency: 0 00:14:01.716 Relative Write Throughput: 0 00:14:01.716 Relative Write Latency: 0 00:14:01.716 Idle Power: Not Reported 00:14:01.716 Active Power: Not Reported 00:14:01.716 Non-Operational Permissive Mode: Not Supported 00:14:01.716 00:14:01.716 Health Information 00:14:01.716 ================== 00:14:01.716 Critical Warnings: 00:14:01.716 Available Spare Space: OK 00:14:01.716 Temperature: OK 00:14:01.716 Device Reliability: OK 00:14:01.716 Read Only: No 00:14:01.716 Volatile Memory Backup: OK 00:14:01.716 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:01.716 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:01.716 Available Spare: 0% 00:14:01.716 Available Sp[2024-11-19 13:06:04.860243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:01.716 [2024-11-19 13:06:04.860256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:01.716 [2024-11-19 13:06:04.860280] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:01.716 [2024-11-19 13:06:04.860289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.716 [2024-11-19 13:06:04.860295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.716 [2024-11-19 13:06:04.860300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.716 [2024-11-19 13:06:04.860306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.716 [2024-11-19 13:06:04.862956] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:01.716 [2024-11-19 13:06:04.862967] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:01.716 [2024-11-19 13:06:04.863447] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:01.716 [2024-11-19 13:06:04.863496] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:01.716 [2024-11-19 13:06:04.863502] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:01.716 [2024-11-19 13:06:04.864458] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:01.716 [2024-11-19 13:06:04.864469] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:01.716 [2024-11-19 13:06:04.864518] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:01.716 [2024-11-19 13:06:04.866490] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:01.716 are Threshold: 0% 00:14:01.716 Life Percentage Used: 0% 00:14:01.716 Data Units Read: 0 00:14:01.716 Data Units Written: 0 00:14:01.716 Host Read Commands: 0 00:14:01.716 Host Write Commands: 0 00:14:01.716 Controller Busy Time: 0 minutes 00:14:01.716 Power Cycles: 0 00:14:01.716 Power On Hours: 0 hours 00:14:01.716 Unsafe Shutdowns: 0 00:14:01.716 Unrecoverable Media Errors: 0 00:14:01.716 Lifetime Error Log Entries: 0 00:14:01.716 Warning Temperature Time: 0 minutes 00:14:01.716 Critical Temperature Time: 0 minutes 00:14:01.716 00:14:01.716 Number of Queues 00:14:01.716 ================ 00:14:01.716 Number of I/O Submission Queues: 127 00:14:01.716 Number of I/O Completion Queues: 127 00:14:01.716 00:14:01.716 Active Namespaces 00:14:01.716 ================= 00:14:01.716 Namespace ID:1 00:14:01.716 Error Recovery Timeout: Unlimited 00:14:01.716 Command Set Identifier: NVM (00h) 00:14:01.716 Deallocate: Supported 00:14:01.716 Deallocated/Unwritten Error: Not Supported 00:14:01.716 Deallocated Read Value: Unknown 00:14:01.716 Deallocate in Write Zeroes: Not Supported 00:14:01.716 Deallocated Guard Field: 0xFFFF 00:14:01.716 Flush: Supported 00:14:01.716 Reservation: Supported 00:14:01.716 Namespace Sharing Capabilities: Multiple Controllers 00:14:01.716 Size (in LBAs): 131072 (0GiB) 00:14:01.716 Capacity (in LBAs): 131072 (0GiB) 00:14:01.716 Utilization (in LBAs): 131072 (0GiB) 00:14:01.716 NGUID: D305B95399C44E6B8293069CBD63FF5B 00:14:01.716 UUID: d305b953-99c4-4e6b-8293-069cbd63ff5b 00:14:01.716 Thin Provisioning: Not Supported 00:14:01.716 Per-NS Atomic Units: Yes 00:14:01.716 Atomic Boundary Size (Normal): 0 00:14:01.716 Atomic Boundary Size (PFail): 0 00:14:01.716 Atomic Boundary Offset: 0 00:14:01.716 Maximum Single Source Range Length: 65535 00:14:01.716 Maximum Copy Length: 65535 00:14:01.716 Maximum Source Range Count: 1 00:14:01.716 NGUID/EUI64 Never Reused: No 00:14:01.716 Namespace Write Protected: No 00:14:01.716 Number of LBA Formats: 1 00:14:01.716 Current LBA Format: LBA Format #00 00:14:01.716 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:01.716 00:14:01.716 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
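Annotation: with the identify pass complete and the controller shut down cleanly (CSTS reports shutdown complete, the cntrl file and memory region are released above), the test moves on to spdk_nvme_perf against the same vfio-user controller: queue depth 128 (-q), 4096-byte I/Os (-o), a pure read workload (-w) for 5 seconds (-t) on the core-1 worker (-c 0x2). As an illustrative variant not taken from the trace, the same binary could be pointed at the second device simply by swapping the transport ID:

    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    # hypothetical: exercise cnode2 instead of cnode1 with identical parameters
    $perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
          -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The read results that follow are self-consistent under Little's law: 128 I/Os in flight divided by 39945 IOPS gives about 3.2 ms, matching the 3204.20 us average latency the table reports.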
00:14:01.974 [2024-11-19 13:06:05.092746] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:07.239 Initializing NVMe Controllers 00:14:07.239 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:07.239 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:07.239 Initialization complete. Launching workers. 00:14:07.239 ======================================================== 00:14:07.239 Latency(us) 00:14:07.239 Device Information : IOPS MiB/s Average min max 00:14:07.239 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39945.29 156.04 3204.20 972.27 8598.00 00:14:07.239 ======================================================== 00:14:07.239 Total : 39945.29 156.04 3204.20 972.27 8598.00 00:14:07.239 00:14:07.239 [2024-11-19 13:06:10.116848] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:07.239 13:06:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:07.239 [2024-11-19 13:06:10.351930] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:12.509 Initializing NVMe Controllers 00:14:12.509 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:12.509 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:12.509 Initialization complete. Launching workers. 
00:14:12.509 ======================================================== 00:14:12.509 Latency(us) 00:14:12.509 Device Information : IOPS MiB/s Average min max 00:14:12.509 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16070.00 62.77 7975.20 6008.45 15459.67 00:14:12.509 ======================================================== 00:14:12.509 Total : 16070.00 62.77 7975.20 6008.45 15459.67 00:14:12.509 00:14:12.509 [2024-11-19 13:06:15.390213] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:12.509 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:12.509 [2024-11-19 13:06:15.608226] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:17.782 [2024-11-19 13:06:20.680289] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:17.782 Initializing NVMe Controllers 00:14:17.782 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:17.782 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:17.782 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:17.782 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:17.782 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:17.782 Initialization complete. Launching workers. 00:14:17.782 Starting thread on core 2 00:14:17.782 Starting thread on core 3 00:14:17.782 Starting thread on core 1 00:14:17.782 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:17.782 [2024-11-19 13:06:20.987335] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:21.070 [2024-11-19 13:06:24.053249] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:21.070 Initializing NVMe Controllers 00:14:21.070 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:21.070 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:21.070 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:21.070 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:21.070 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:21.070 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:21.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:21.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:21.071 Initialization complete. Launching workers. 
00:14:21.071 Starting thread on core 1 with urgent priority queue 00:14:21.071 Starting thread on core 2 with urgent priority queue 00:14:21.071 Starting thread on core 3 with urgent priority queue 00:14:21.071 Starting thread on core 0 with urgent priority queue 00:14:21.071 SPDK bdev Controller (SPDK1 ) core 0: 9010.33 IO/s 11.10 secs/100000 ios 00:14:21.071 SPDK bdev Controller (SPDK1 ) core 1: 8221.67 IO/s 12.16 secs/100000 ios 00:14:21.071 SPDK bdev Controller (SPDK1 ) core 2: 8823.67 IO/s 11.33 secs/100000 ios 00:14:21.071 SPDK bdev Controller (SPDK1 ) core 3: 8072.00 IO/s 12.39 secs/100000 ios 00:14:21.071 ======================================================== 00:14:21.071 00:14:21.071 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:21.071 [2024-11-19 13:06:24.348400] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:21.071 Initializing NVMe Controllers 00:14:21.071 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:21.071 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:21.071 Namespace ID: 1 size: 0GB 00:14:21.071 Initialization complete. 00:14:21.071 INFO: using host memory buffer for IO 00:14:21.071 Hello world! 00:14:21.071 [2024-11-19 13:06:24.382645] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:21.071 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:21.329 [2024-11-19 13:06:24.663610] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:22.704 Initializing NVMe Controllers 00:14:22.704 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:22.704 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:22.704 Initialization complete. Launching workers. 
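The per-core lines in the arbitration summary above report one measurement two ways: the run config shows -n 100000, so each core's "secs/100000 ios" is just 100000 divided by its IO/s figure. A worked check against core 0 from the table above:

awk 'BEGIN { printf "%.2f\n", 100000 / 9010.33 }'   # prints 11.10, matching "11.10 secs/100000 ios"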
00:14:22.704 submit (in ns) avg, min, max = 7122.8, 3283.5, 3998708.7 00:14:22.704 complete (in ns) avg, min, max = 18389.4, 1800.9, 3998685.2 00:14:22.704 00:14:22.704 Submit histogram 00:14:22.704 ================ 00:14:22.704 Range in us Cumulative Count 00:14:22.704 3.283 - 3.297: 0.0486% ( 8) 00:14:22.704 3.297 - 3.311: 0.1093% ( 10) 00:14:22.704 3.311 - 3.325: 0.5829% ( 78) 00:14:22.704 3.325 - 3.339: 3.0483% ( 406) 00:14:22.704 3.339 - 3.353: 8.3009% ( 865) 00:14:22.704 3.353 - 3.367: 14.3248% ( 992) 00:14:22.704 3.367 - 3.381: 20.4336% ( 1006) 00:14:22.704 3.381 - 3.395: 27.5140% ( 1166) 00:14:22.704 3.395 - 3.409: 33.3070% ( 954) 00:14:22.704 3.409 - 3.423: 38.5111% ( 857) 00:14:22.704 3.423 - 3.437: 44.0612% ( 914) 00:14:22.704 3.437 - 3.450: 48.4090% ( 716) 00:14:22.704 3.450 - 3.464: 52.3622% ( 651) 00:14:22.704 3.464 - 3.478: 56.8436% ( 738) 00:14:22.704 3.478 - 3.492: 64.3794% ( 1241) 00:14:22.704 3.492 - 3.506: 69.9721% ( 921) 00:14:22.704 3.506 - 3.520: 74.0163% ( 666) 00:14:22.704 3.520 - 3.534: 79.3175% ( 873) 00:14:22.704 3.534 - 3.548: 83.1370% ( 629) 00:14:22.704 3.548 - 3.562: 85.1530% ( 332) 00:14:22.704 3.562 - 3.590: 87.0476% ( 312) 00:14:22.704 3.590 - 3.617: 87.5273% ( 79) 00:14:22.704 3.617 - 3.645: 88.6629% ( 187) 00:14:22.704 3.645 - 3.673: 90.3935% ( 285) 00:14:22.704 3.673 - 3.701: 92.1059% ( 282) 00:14:22.704 3.701 - 3.729: 93.8305% ( 284) 00:14:22.704 3.729 - 3.757: 95.7068% ( 309) 00:14:22.704 3.757 - 3.784: 97.1946% ( 245) 00:14:22.704 3.784 - 3.812: 98.3969% ( 198) 00:14:22.704 3.812 - 3.840: 98.9920% ( 98) 00:14:22.704 3.840 - 3.868: 99.3745% ( 63) 00:14:22.704 3.868 - 3.896: 99.5810% ( 34) 00:14:22.704 3.896 - 3.923: 99.6235% ( 7) 00:14:22.704 3.923 - 3.951: 99.6417% ( 3) 00:14:22.704 5.037 - 5.064: 99.6478% ( 1) 00:14:22.704 5.092 - 5.120: 99.6539% ( 1) 00:14:22.704 5.148 - 5.176: 99.6599% ( 1) 00:14:22.704 5.203 - 5.231: 99.6721% ( 2) 00:14:22.704 5.259 - 5.287: 99.6782% ( 1) 00:14:22.704 5.287 - 5.315: 99.6842% ( 1) 00:14:22.704 5.315 - 5.343: 99.7025% ( 3) 00:14:22.704 5.343 - 5.370: 99.7085% ( 1) 00:14:22.704 5.426 - 5.454: 99.7146% ( 1) 00:14:22.704 5.565 - 5.593: 99.7207% ( 1) 00:14:22.704 5.593 - 5.621: 99.7389% ( 3) 00:14:22.704 5.677 - 5.704: 99.7450% ( 1) 00:14:22.704 5.704 - 5.732: 99.7510% ( 1) 00:14:22.704 5.760 - 5.788: 99.7632% ( 2) 00:14:22.704 5.788 - 5.816: 99.7692% ( 1) 00:14:22.704 5.843 - 5.871: 99.7753% ( 1) 00:14:22.704 5.983 - 6.010: 99.7814% ( 1) 00:14:22.704 6.094 - 6.122: 99.7875% ( 1) 00:14:22.704 6.150 - 6.177: 99.7935% ( 1) 00:14:22.704 6.261 - 6.289: 99.7996% ( 1) 00:14:22.704 6.289 - 6.317: 99.8057% ( 1) 00:14:22.704 6.344 - 6.372: 99.8118% ( 1) 00:14:22.704 6.428 - 6.456: 99.8178% ( 1) 00:14:22.704 6.567 - 6.595: 99.8239% ( 1) 00:14:22.704 6.929 - 6.957: 99.8300% ( 1) 00:14:22.704 7.068 - 7.096: 99.8421% ( 2) 00:14:22.704 7.402 - 7.457: 99.8482% ( 1) 00:14:22.705 7.513 - 7.569: 99.8543% ( 1) 00:14:22.705 7.569 - 7.624: 99.8603% ( 1) 00:14:22.705 7.791 - 7.847: 99.8664% ( 1) 00:14:22.705 7.847 - 7.903: 99.8786% ( 2) 00:14:22.705 8.014 - 8.070: 99.8846% ( 1) 00:14:22.705 8.292 - 8.348: 99.8907% ( 1) 00:14:22.705 8.348 - 8.403: 99.8968% ( 1) 00:14:22.705 9.071 - 9.127: 99.9028% ( 1) 00:14:22.705 40.737 - 40.960: 99.9089% ( 1) 00:14:22.705 3989.148 - 4017.642: 100.0000% ( 15) 00:14:22.705 00:14:22.705 Complete histogram 00:14:22.705 ================== 00:14:22.705 Range in us Cumulative Count 00:14:22.705 1.795 - 1.809: 0.0061% ( 1) 00:14:22.705 1.809 - 1.823: 0.0425% ( 6) 00:14:22.705 1.823 - 1.837: 0.2308% ( 31) 
00:14:22.705 1.837 - 1.850: 1.4331% ( 198) 00:14:22.705 1.850 - 1.864: 3.7467% ( 381) 00:14:22.705 1.864 - 1.878: 40.2538% ( 6012) 00:14:22.705 1.878 - 1.892: 80.4105% ( 6613) 00:14:22.705 [2024-11-19 13:06:25.682549] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:22.705 1.892 - 1.906: 89.3612% ( 1474) 00:14:22.705 1.906 - 1.920: 93.9883% ( 762) 00:14:22.705 1.920 - 1.934: 94.9721% ( 162) 00:14:22.705 1.934 - 1.948: 96.4234% ( 239) 00:14:22.705 1.948 - 1.962: 98.3787% ( 322) 00:14:22.705 1.962 - 1.976: 99.2045% ( 136) 00:14:22.705 1.976 - 1.990: 99.3503% ( 24) 00:14:22.705 1.990 - 2.003: 99.3988% ( 8) 00:14:22.705 2.017 - 2.031: 99.4110% ( 2) 00:14:22.705 2.031 - 2.045: 99.4231% ( 2) 00:14:22.705 2.115 - 2.129: 99.4292% ( 1) 00:14:22.705 2.337 - 2.351: 99.4353% ( 1) 00:14:22.705 3.520 - 3.534: 99.4413% ( 1) 00:14:22.705 3.729 - 3.757: 99.4474% ( 1) 00:14:22.705 3.757 - 3.784: 99.4535% ( 1) 00:14:22.705 3.868 - 3.896: 99.4656% ( 2) 00:14:22.705 3.923 - 3.951: 99.4778% ( 2) 00:14:22.705 4.007 - 4.035: 99.4838% ( 1) 00:14:22.705 4.035 - 4.063: 99.5081% ( 4) 00:14:22.705 4.202 - 4.230: 99.5142% ( 1) 00:14:22.705 4.257 - 4.285: 99.5203% ( 1) 00:14:22.705 4.786 - 4.814: 99.5264% ( 1) 00:14:22.705 4.842 - 4.870: 99.5446% ( 3) 00:14:22.705 5.231 - 5.259: 99.5506% ( 1) 00:14:22.705 5.259 - 5.287: 99.5567% ( 1) 00:14:22.705 5.343 - 5.370: 99.5628% ( 1) 00:14:22.705 5.370 - 5.398: 99.5689% ( 1) 00:14:22.705 5.704 - 5.732: 99.5749% ( 1) 00:14:22.705 6.205 - 6.233: 99.5810% ( 1) 00:14:22.705 147.812 - 148.703: 99.5871% ( 1) 00:14:22.705 3989.148 - 4017.642: 100.0000% ( 68) 00:14:22.705 00:14:22.705 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:22.705 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:22.705 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:22.705 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:22.705 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:22.705 [ 00:14:22.705 { 00:14:22.705 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:22.705 "subtype": "Discovery", 00:14:22.705 "listen_addresses": [], 00:14:22.705 "allow_any_host": true, 00:14:22.705 "hosts": [] 00:14:22.705 }, 00:14:22.705 { 00:14:22.705 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:22.705 "subtype": "NVMe", 00:14:22.705 "listen_addresses": [ 00:14:22.705 { 00:14:22.705 "trtype": "VFIOUSER", 00:14:22.705 "adrfam": "IPv4", 00:14:22.705 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:22.705 "trsvcid": "0" 00:14:22.705 } 00:14:22.705 ], 00:14:22.705 "allow_any_host": true, 00:14:22.705 "hosts": [], 00:14:22.705 "serial_number": "SPDK1", 00:14:22.705 "model_number": "SPDK bdev Controller", 00:14:22.705 "max_namespaces": 32, 00:14:22.705 "min_cntlid": 1, 00:14:22.705 "max_cntlid": 65519, 00:14:22.705 "namespaces": [ 00:14:22.705 { 00:14:22.705 "nsid": 1, 00:14:22.705 "bdev_name": "Malloc1", 00:14:22.705 "name": "Malloc1", 00:14:22.705 "nguid": "D305B95399C44E6B8293069CBD63FF5B", 00:14:22.705 "uuid": "d305b953-99c4-4e6b-8293-069cbd63ff5b" 00:14:22.705 }
00:14:22.705 ] 00:14:22.705 }, 00:14:22.705 { 00:14:22.705 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:22.705 "subtype": "NVMe", 00:14:22.705 "listen_addresses": [ 00:14:22.705 { 00:14:22.705 "trtype": "VFIOUSER", 00:14:22.705 "adrfam": "IPv4", 00:14:22.705 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:22.705 "trsvcid": "0" 00:14:22.705 } 00:14:22.705 ], 00:14:22.705 "allow_any_host": true, 00:14:22.705 "hosts": [], 00:14:22.705 "serial_number": "SPDK2", 00:14:22.705 "model_number": "SPDK bdev Controller", 00:14:22.705 "max_namespaces": 32, 00:14:22.705 "min_cntlid": 1, 00:14:22.705 "max_cntlid": 65519, 00:14:22.705 "namespaces": [ 00:14:22.705 { 00:14:22.705 "nsid": 1, 00:14:22.705 "bdev_name": "Malloc2", 00:14:22.705 "name": "Malloc2", 00:14:22.705 "nguid": "50E4F0B8090D4A5A8C65F2099DAB440D", 00:14:22.705 "uuid": "50e4f0b8-090d-4a5a-8c65-f2099dab440d" 00:14:22.705 } 00:14:22.705 ] 00:14:22.705 } 00:14:22.705 ] 00:14:22.705 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:22.705 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2809947 00:14:22.705 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:22.705 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:22.705 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:22.705 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:22.705 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:22.705 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:22.706 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:22.706 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:22.964 [2024-11-19 13:06:26.094371] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:22.964 Malloc3 00:14:22.965 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:22.965 [2024-11-19 13:06:26.337118] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:23.223 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:23.223 Asynchronous Event Request test 00:14:23.223 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:23.223 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:23.223 Registering asynchronous event callbacks... 00:14:23.223 Starting namespace attribute notice tests for all controllers... 
00:14:23.223 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:23.223 aer_cb - Changed Namespace 00:14:23.223 Cleaning up... 00:14:23.223 [ 00:14:23.223 { 00:14:23.223 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:23.223 "subtype": "Discovery", 00:14:23.223 "listen_addresses": [], 00:14:23.223 "allow_any_host": true, 00:14:23.224 "hosts": [] 00:14:23.224 }, 00:14:23.224 { 00:14:23.224 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:23.224 "subtype": "NVMe", 00:14:23.224 "listen_addresses": [ 00:14:23.224 { 00:14:23.224 "trtype": "VFIOUSER", 00:14:23.224 "adrfam": "IPv4", 00:14:23.224 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:23.224 "trsvcid": "0" 00:14:23.224 } 00:14:23.224 ], 00:14:23.224 "allow_any_host": true, 00:14:23.224 "hosts": [], 00:14:23.224 "serial_number": "SPDK1", 00:14:23.224 "model_number": "SPDK bdev Controller", 00:14:23.224 "max_namespaces": 32, 00:14:23.224 "min_cntlid": 1, 00:14:23.224 "max_cntlid": 65519, 00:14:23.224 "namespaces": [ 00:14:23.224 { 00:14:23.224 "nsid": 1, 00:14:23.224 "bdev_name": "Malloc1", 00:14:23.224 "name": "Malloc1", 00:14:23.224 "nguid": "D305B95399C44E6B8293069CBD63FF5B", 00:14:23.224 "uuid": "d305b953-99c4-4e6b-8293-069cbd63ff5b" 00:14:23.224 }, 00:14:23.224 { 00:14:23.224 "nsid": 2, 00:14:23.224 "bdev_name": "Malloc3", 00:14:23.224 "name": "Malloc3", 00:14:23.224 "nguid": "1394339FA7D44E7782D5D19B6E3DE3A4", 00:14:23.224 "uuid": "1394339f-a7d4-4e77-82d5-d19b6e3de3a4" 00:14:23.224 } 00:14:23.224 ] 00:14:23.224 }, 00:14:23.224 { 00:14:23.224 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:23.224 "subtype": "NVMe", 00:14:23.224 "listen_addresses": [ 00:14:23.224 { 00:14:23.224 "trtype": "VFIOUSER", 00:14:23.224 "adrfam": "IPv4", 00:14:23.224 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:23.224 "trsvcid": "0" 00:14:23.224 } 00:14:23.224 ], 00:14:23.224 "allow_any_host": true, 00:14:23.224 "hosts": [], 00:14:23.224 "serial_number": "SPDK2", 00:14:23.224 "model_number": "SPDK bdev Controller", 00:14:23.224 "max_namespaces": 32, 00:14:23.224 "min_cntlid": 1, 00:14:23.224 "max_cntlid": 65519, 00:14:23.224 "namespaces": [ 00:14:23.224 { 00:14:23.224 "nsid": 1, 00:14:23.224 "bdev_name": "Malloc2", 00:14:23.224 "name": "Malloc2", 00:14:23.224 "nguid": "50E4F0B8090D4A5A8C65F2099DAB440D", 00:14:23.224 "uuid": "50e4f0b8-090d-4a5a-8c65-f2099dab440d" 00:14:23.224 } 00:14:23.224 ] 00:14:23.224 } 00:14:23.224 ] 00:14:23.224 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2809947 00:14:23.224 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:23.224 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:23.224 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:23.224 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:23.224 [2024-11-19 13:06:26.589867] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
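The aer_vfio_user pass that just finished (@90, expanded through @22-@44 above) is a namespace-attribute AER round trip: start the listener, wait for it to arm, hot-add a namespace, and let the event fire. Condensed from the trace into a standalone sketch (every command, path and name below is taken from this log; the polling loop stands in for the harness's waitforfile helper):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
$SPDK/test/nvme/aer/aer -r "$TR" -n 2 -g -t /tmp/aer_touch_file &    # arms an async event request
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done               # listener signals readiness via the touch file
rm -f /tmp/aer_touch_file
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3        # new backing bdev (appears as nsid 2 above)
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2   # triggers the AER
wait   # aer exits after "aer_cb - Changed Namespace"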
00:14:23.224 [2024-11-19 13:06:26.589916] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2810163 ] 00:14:23.485 [2024-11-19 13:06:26.629741] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:23.485 [2024-11-19 13:06:26.638198] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:23.485 [2024-11-19 13:06:26.638222] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2534d39000 00:14:23.485 [2024-11-19 13:06:26.639190] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:23.485 [2024-11-19 13:06:26.640198] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:23.485 [2024-11-19 13:06:26.641201] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:23.485 [2024-11-19 13:06:26.642203] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:23.485 [2024-11-19 13:06:26.643210] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:23.485 [2024-11-19 13:06:26.644216] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:23.485 [2024-11-19 13:06:26.645229] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:23.485 [2024-11-19 13:06:26.646236] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:23.485 [2024-11-19 13:06:26.647243] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:23.485 [2024-11-19 13:06:26.647253] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2534d2e000 00:14:23.485 [2024-11-19 13:06:26.648192] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:23.485 [2024-11-19 13:06:26.657711] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:23.485 [2024-11-19 13:06:26.657737] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:23.485 [2024-11-19 13:06:26.662825] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:23.485 [2024-11-19 13:06:26.662867] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:23.485 [2024-11-19 13:06:26.662934] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:23.485 
[2024-11-19 13:06:26.662951] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:23.485 [2024-11-19 13:06:26.662956] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:23.485 [2024-11-19 13:06:26.663828] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:23.485 [2024-11-19 13:06:26.663837] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:23.485 [2024-11-19 13:06:26.663843] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:23.485 [2024-11-19 13:06:26.664838] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:23.485 [2024-11-19 13:06:26.664847] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:23.485 [2024-11-19 13:06:26.664854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:23.485 [2024-11-19 13:06:26.665846] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:23.485 [2024-11-19 13:06:26.665855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:23.485 [2024-11-19 13:06:26.666858] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:23.485 [2024-11-19 13:06:26.666866] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:23.485 [2024-11-19 13:06:26.666871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:23.485 [2024-11-19 13:06:26.666877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:23.485 [2024-11-19 13:06:26.666985] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:23.485 [2024-11-19 13:06:26.666990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:23.485 [2024-11-19 13:06:26.666994] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:23.485 [2024-11-19 13:06:26.667866] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:23.485 [2024-11-19 13:06:26.668868] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:23.485 [2024-11-19 13:06:26.669874] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:23.486 [2024-11-19 13:06:26.670881] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:23.486 [2024-11-19 13:06:26.670919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:23.486 [2024-11-19 13:06:26.671892] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:23.486 [2024-11-19 13:06:26.671901] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:23.486 [2024-11-19 13:06:26.671905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.671922] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:23.486 [2024-11-19 13:06:26.671929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.671940] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:23.486 [2024-11-19 13:06:26.671945] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:23.486 [2024-11-19 13:06:26.671951] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:23.486 [2024-11-19 13:06:26.671963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:23.486 [2024-11-19 13:06:26.679954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:23.486 [2024-11-19 13:06:26.679964] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:23.486 [2024-11-19 13:06:26.679969] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:23.486 [2024-11-19 13:06:26.679973] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:23.486 [2024-11-19 13:06:26.679978] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:23.486 [2024-11-19 13:06:26.679985] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:23.486 [2024-11-19 13:06:26.679989] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:23.486 [2024-11-19 13:06:26.679994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.680003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:23.486 [2024-11-19 
13:06:26.680013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:23.486 [2024-11-19 13:06:26.687953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:23.486 [2024-11-19 13:06:26.687965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:23.486 [2024-11-19 13:06:26.687973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:23.486 [2024-11-19 13:06:26.687980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:23.486 [2024-11-19 13:06:26.687987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:23.486 [2024-11-19 13:06:26.687994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.688001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.688009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:23.486 [2024-11-19 13:06:26.695954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:23.486 [2024-11-19 13:06:26.695967] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:23.486 [2024-11-19 13:06:26.695973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.695979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.695985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.695994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:23.486 [2024-11-19 13:06:26.703952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:23.486 [2024-11-19 13:06:26.704010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.704018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.704025] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:23.486 [2024-11-19 13:06:26.704030] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:14:23.486 [2024-11-19 13:06:26.704033] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:23.486 [2024-11-19 13:06:26.704039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:23.486 [2024-11-19 13:06:26.711952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:23.486 [2024-11-19 13:06:26.711963] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:23.486 [2024-11-19 13:06:26.711974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.711981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.711988] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:23.486 [2024-11-19 13:06:26.711992] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:23.486 [2024-11-19 13:06:26.711995] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:23.486 [2024-11-19 13:06:26.712001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:23.486 [2024-11-19 13:06:26.719954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:23.486 [2024-11-19 13:06:26.719969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.719976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.719983] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:23.486 [2024-11-19 13:06:26.719987] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:23.486 [2024-11-19 13:06:26.719991] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:23.486 [2024-11-19 13:06:26.719997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:23.486 [2024-11-19 13:06:26.727951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:23.486 [2024-11-19 13:06:26.727960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.727966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.727974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.727980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.727985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:23.486 [2024-11-19 13:06:26.727989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:23.487 [2024-11-19 13:06:26.727994] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:23.487 [2024-11-19 13:06:26.727998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:23.487 [2024-11-19 13:06:26.728003] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:23.487 [2024-11-19 13:06:26.728017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:23.487 [2024-11-19 13:06:26.735141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:23.487 [2024-11-19 13:06:26.735155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:23.487 [2024-11-19 13:06:26.743951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:23.487 [2024-11-19 13:06:26.743963] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:23.487 [2024-11-19 13:06:26.751952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:23.487 [2024-11-19 13:06:26.751964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:23.487 [2024-11-19 13:06:26.759951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:23.487 [2024-11-19 13:06:26.759966] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:23.487 [2024-11-19 13:06:26.759971] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:23.487 [2024-11-19 13:06:26.759977] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:23.487 [2024-11-19 13:06:26.759980] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:23.487 [2024-11-19 13:06:26.759983] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:23.487 [2024-11-19 13:06:26.759989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:23.487 [2024-11-19 13:06:26.759996] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:23.487 
[2024-11-19 13:06:26.760000] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:23.487 [2024-11-19 13:06:26.760003] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:23.487 [2024-11-19 13:06:26.760009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:23.487 [2024-11-19 13:06:26.760015] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:23.487 [2024-11-19 13:06:26.760019] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:23.487 [2024-11-19 13:06:26.760022] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:23.487 [2024-11-19 13:06:26.760028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:23.487 [2024-11-19 13:06:26.760034] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:23.487 [2024-11-19 13:06:26.760038] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:23.487 [2024-11-19 13:06:26.760042] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:23.487 [2024-11-19 13:06:26.760047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:23.487 [2024-11-19 13:06:26.767952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:23.487 [2024-11-19 13:06:26.767965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:23.487 [2024-11-19 13:06:26.767975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:23.487 [2024-11-19 13:06:26.767981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:23.487 ===================================================== 00:14:23.487 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:23.487 ===================================================== 00:14:23.487 Controller Capabilities/Features 00:14:23.487 ================================ 00:14:23.487 Vendor ID: 4e58 00:14:23.487 Subsystem Vendor ID: 4e58 00:14:23.487 Serial Number: SPDK2 00:14:23.487 Model Number: SPDK bdev Controller 00:14:23.487 Firmware Version: 25.01 00:14:23.487 Recommended Arb Burst: 6 00:14:23.487 IEEE OUI Identifier: 8d 6b 50 00:14:23.487 Multi-path I/O 00:14:23.487 May have multiple subsystem ports: Yes 00:14:23.487 May have multiple controllers: Yes 00:14:23.487 Associated with SR-IOV VF: No 00:14:23.487 Max Data Transfer Size: 131072 00:14:23.487 Max Number of Namespaces: 32 00:14:23.487 Max Number of I/O Queues: 127 00:14:23.487 NVMe Specification Version (VS): 1.3 00:14:23.487 NVMe Specification Version (Identify): 1.3 00:14:23.487 Maximum Queue Entries: 256 00:14:23.487 Contiguous Queues Required: Yes 00:14:23.487 Arbitration Mechanisms Supported 00:14:23.487 Weighted Round Robin: Not Supported 00:14:23.487 Vendor Specific: Not 
Supported 00:14:23.487 Reset Timeout: 15000 ms 00:14:23.487 Doorbell Stride: 4 bytes 00:14:23.487 NVM Subsystem Reset: Not Supported 00:14:23.487 Command Sets Supported 00:14:23.487 NVM Command Set: Supported 00:14:23.487 Boot Partition: Not Supported 00:14:23.487 Memory Page Size Minimum: 4096 bytes 00:14:23.487 Memory Page Size Maximum: 4096 bytes 00:14:23.487 Persistent Memory Region: Not Supported 00:14:23.487 Optional Asynchronous Events Supported 00:14:23.487 Namespace Attribute Notices: Supported 00:14:23.487 Firmware Activation Notices: Not Supported 00:14:23.487 ANA Change Notices: Not Supported 00:14:23.487 PLE Aggregate Log Change Notices: Not Supported 00:14:23.487 LBA Status Info Alert Notices: Not Supported 00:14:23.487 EGE Aggregate Log Change Notices: Not Supported 00:14:23.487 Normal NVM Subsystem Shutdown event: Not Supported 00:14:23.487 Zone Descriptor Change Notices: Not Supported 00:14:23.487 Discovery Log Change Notices: Not Supported 00:14:23.487 Controller Attributes 00:14:23.487 128-bit Host Identifier: Supported 00:14:23.487 Non-Operational Permissive Mode: Not Supported 00:14:23.487 NVM Sets: Not Supported 00:14:23.487 Read Recovery Levels: Not Supported 00:14:23.487 Endurance Groups: Not Supported 00:14:23.487 Predictable Latency Mode: Not Supported 00:14:23.487 Traffic Based Keep ALive: Not Supported 00:14:23.487 Namespace Granularity: Not Supported 00:14:23.487 SQ Associations: Not Supported 00:14:23.487 UUID List: Not Supported 00:14:23.487 Multi-Domain Subsystem: Not Supported 00:14:23.487 Fixed Capacity Management: Not Supported 00:14:23.487 Variable Capacity Management: Not Supported 00:14:23.487 Delete Endurance Group: Not Supported 00:14:23.487 Delete NVM Set: Not Supported 00:14:23.487 Extended LBA Formats Supported: Not Supported 00:14:23.487 Flexible Data Placement Supported: Not Supported 00:14:23.487 00:14:23.487 Controller Memory Buffer Support 00:14:23.487 ================================ 00:14:23.487 Supported: No 00:14:23.487 00:14:23.487 Persistent Memory Region Support 00:14:23.487 ================================ 00:14:23.487 Supported: No 00:14:23.487 00:14:23.487 Admin Command Set Attributes 00:14:23.487 ============================ 00:14:23.487 Security Send/Receive: Not Supported 00:14:23.487 Format NVM: Not Supported 00:14:23.487 Firmware Activate/Download: Not Supported 00:14:23.487 Namespace Management: Not Supported 00:14:23.487 Device Self-Test: Not Supported 00:14:23.488 Directives: Not Supported 00:14:23.488 NVMe-MI: Not Supported 00:14:23.488 Virtualization Management: Not Supported 00:14:23.488 Doorbell Buffer Config: Not Supported 00:14:23.488 Get LBA Status Capability: Not Supported 00:14:23.488 Command & Feature Lockdown Capability: Not Supported 00:14:23.488 Abort Command Limit: 4 00:14:23.488 Async Event Request Limit: 4 00:14:23.488 Number of Firmware Slots: N/A 00:14:23.488 Firmware Slot 1 Read-Only: N/A 00:14:23.488 Firmware Activation Without Reset: N/A 00:14:23.488 Multiple Update Detection Support: N/A 00:14:23.488 Firmware Update Granularity: No Information Provided 00:14:23.488 Per-Namespace SMART Log: No 00:14:23.488 Asymmetric Namespace Access Log Page: Not Supported 00:14:23.488 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:23.488 Command Effects Log Page: Supported 00:14:23.488 Get Log Page Extended Data: Supported 00:14:23.488 Telemetry Log Pages: Not Supported 00:14:23.488 Persistent Event Log Pages: Not Supported 00:14:23.488 Supported Log Pages Log Page: May Support 00:14:23.488 Commands Supported & 
Effects Log Page: Not Supported 00:14:23.488 Feature Identifiers & Effects Log Page:May Support 00:14:23.488 NVMe-MI Commands & Effects Log Page: May Support 00:14:23.488 Data Area 4 for Telemetry Log: Not Supported 00:14:23.488 Error Log Page Entries Supported: 128 00:14:23.488 Keep Alive: Supported 00:14:23.488 Keep Alive Granularity: 10000 ms 00:14:23.488 00:14:23.488 NVM Command Set Attributes 00:14:23.488 ========================== 00:14:23.488 Submission Queue Entry Size 00:14:23.488 Max: 64 00:14:23.488 Min: 64 00:14:23.488 Completion Queue Entry Size 00:14:23.488 Max: 16 00:14:23.488 Min: 16 00:14:23.488 Number of Namespaces: 32 00:14:23.488 Compare Command: Supported 00:14:23.488 Write Uncorrectable Command: Not Supported 00:14:23.488 Dataset Management Command: Supported 00:14:23.488 Write Zeroes Command: Supported 00:14:23.488 Set Features Save Field: Not Supported 00:14:23.488 Reservations: Not Supported 00:14:23.488 Timestamp: Not Supported 00:14:23.488 Copy: Supported 00:14:23.488 Volatile Write Cache: Present 00:14:23.488 Atomic Write Unit (Normal): 1 00:14:23.488 Atomic Write Unit (PFail): 1 00:14:23.488 Atomic Compare & Write Unit: 1 00:14:23.488 Fused Compare & Write: Supported 00:14:23.488 Scatter-Gather List 00:14:23.488 SGL Command Set: Supported (Dword aligned) 00:14:23.488 SGL Keyed: Not Supported 00:14:23.488 SGL Bit Bucket Descriptor: Not Supported 00:14:23.488 SGL Metadata Pointer: Not Supported 00:14:23.488 Oversized SGL: Not Supported 00:14:23.488 SGL Metadata Address: Not Supported 00:14:23.488 SGL Offset: Not Supported 00:14:23.488 Transport SGL Data Block: Not Supported 00:14:23.488 Replay Protected Memory Block: Not Supported 00:14:23.488 00:14:23.488 Firmware Slot Information 00:14:23.488 ========================= 00:14:23.488 Active slot: 1 00:14:23.488 Slot 1 Firmware Revision: 25.01 00:14:23.488 00:14:23.488 00:14:23.488 Commands Supported and Effects 00:14:23.488 ============================== 00:14:23.488 Admin Commands 00:14:23.488 -------------- 00:14:23.488 Get Log Page (02h): Supported 00:14:23.488 Identify (06h): Supported 00:14:23.488 Abort (08h): Supported 00:14:23.488 Set Features (09h): Supported 00:14:23.488 Get Features (0Ah): Supported 00:14:23.488 Asynchronous Event Request (0Ch): Supported 00:14:23.488 Keep Alive (18h): Supported 00:14:23.488 I/O Commands 00:14:23.488 ------------ 00:14:23.488 Flush (00h): Supported LBA-Change 00:14:23.488 Write (01h): Supported LBA-Change 00:14:23.488 Read (02h): Supported 00:14:23.488 Compare (05h): Supported 00:14:23.488 Write Zeroes (08h): Supported LBA-Change 00:14:23.488 Dataset Management (09h): Supported LBA-Change 00:14:23.488 Copy (19h): Supported LBA-Change 00:14:23.488 00:14:23.488 Error Log 00:14:23.488 ========= 00:14:23.488 00:14:23.488 Arbitration 00:14:23.488 =========== 00:14:23.488 Arbitration Burst: 1 00:14:23.488 00:14:23.488 Power Management 00:14:23.488 ================ 00:14:23.488 Number of Power States: 1 00:14:23.488 Current Power State: Power State #0 00:14:23.488 Power State #0: 00:14:23.488 Max Power: 0.00 W 00:14:23.488 Non-Operational State: Operational 00:14:23.488 Entry Latency: Not Reported 00:14:23.488 Exit Latency: Not Reported 00:14:23.488 Relative Read Throughput: 0 00:14:23.488 Relative Read Latency: 0 00:14:23.488 Relative Write Throughput: 0 00:14:23.488 Relative Write Latency: 0 00:14:23.488 Idle Power: Not Reported 00:14:23.488 Active Power: Not Reported 00:14:23.488 Non-Operational Permissive Mode: Not Supported 00:14:23.488 00:14:23.488 Health Information 
00:14:23.488 ================== 00:14:23.488 Critical Warnings: 00:14:23.488 Available Spare Space: OK 00:14:23.488 Temperature: OK 00:14:23.488 Device Reliability: OK 00:14:23.488 Read Only: No 00:14:23.488 Volatile Memory Backup: OK 00:14:23.488 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:23.488 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:23.488 Available Spare: 0% 00:14:23.488 Available Spare Threshold: 0% [2024-11-19 13:06:26.768070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 [2024-11-19 13:06:26.775954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 [2024-11-19 13:06:26.775990] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD [2024-11-19 13:06:26.775999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-19 13:06:26.776005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-19 13:06:26.776010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-19 13:06:26.776016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-19 13:06:26.776059] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 [2024-11-19 13:06:26.776072] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 [2024-11-19 13:06:26.777063] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller [2024-11-19 13:06:26.777107] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us [2024-11-19 13:06:26.777113] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms [2024-11-19 13:06:26.778065] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 [2024-11-19 13:06:26.778077] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds [2024-11-19 13:06:26.778123] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl [2024-11-19 13:06:26.779102] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:23.488 Life Percentage Used: 0% 00:14:23.488 Data Units Read: 0 00:14:23.488 Data Units Written: 0 00:14:23.488 Host Read Commands: 0 00:14:23.488 Host Write Commands: 0 00:14:23.488 Controller Busy Time: 0 minutes 00:14:23.488 Power Cycles: 0 00:14:23.488 Power On Hours: 0 hours 00:14:23.488 Unsafe Shutdowns: 0 00:14:23.488 Unrecoverable Media Errors: 0 00:14:23.488 Lifetime Error Log Entries: 0 00:14:23.488 Warning Temperature
Time: 0 minutes 00:14:23.488 Critical Temperature Time: 0 minutes 00:14:23.488 00:14:23.488 Number of Queues 00:14:23.488 ================ 00:14:23.488 Number of I/O Submission Queues: 127 00:14:23.488 Number of I/O Completion Queues: 127 00:14:23.488 00:14:23.488 Active Namespaces 00:14:23.488 ================= 00:14:23.488 Namespace ID:1 00:14:23.488 Error Recovery Timeout: Unlimited 00:14:23.489 Command Set Identifier: NVM (00h) 00:14:23.489 Deallocate: Supported 00:14:23.489 Deallocated/Unwritten Error: Not Supported 00:14:23.489 Deallocated Read Value: Unknown 00:14:23.489 Deallocate in Write Zeroes: Not Supported 00:14:23.489 Deallocated Guard Field: 0xFFFF 00:14:23.489 Flush: Supported 00:14:23.489 Reservation: Supported 00:14:23.489 Namespace Sharing Capabilities: Multiple Controllers 00:14:23.489 Size (in LBAs): 131072 (0GiB) 00:14:23.489 Capacity (in LBAs): 131072 (0GiB) 00:14:23.489 Utilization (in LBAs): 131072 (0GiB) 00:14:23.489 NGUID: 50E4F0B8090D4A5A8C65F2099DAB440D 00:14:23.489 UUID: 50e4f0b8-090d-4a5a-8c65-f2099dab440d 00:14:23.489 Thin Provisioning: Not Supported 00:14:23.489 Per-NS Atomic Units: Yes 00:14:23.489 Atomic Boundary Size (Normal): 0 00:14:23.489 Atomic Boundary Size (PFail): 0 00:14:23.489 Atomic Boundary Offset: 0 00:14:23.489 Maximum Single Source Range Length: 65535 00:14:23.489 Maximum Copy Length: 65535 00:14:23.489 Maximum Source Range Count: 1 00:14:23.489 NGUID/EUI64 Never Reused: No 00:14:23.489 Namespace Write Protected: No 00:14:23.489 Number of LBA Formats: 1 00:14:23.489 Current LBA Format: LBA Format #00 00:14:23.489 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:23.489 00:14:23.489 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:23.748 [2024-11-19 13:06:27.016508] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:29.012 Initializing NVMe Controllers 00:14:29.012 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:29.012 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:29.012 Initialization complete. Launching workers. 
00:14:29.012 ======================================================== 00:14:29.012 Latency(us) 00:14:29.012 Device Information : IOPS MiB/s Average min max 00:14:29.012 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39945.52 156.04 3204.18 970.61 7614.00 00:14:29.012 ======================================================== 00:14:29.012 Total : 39945.52 156.04 3204.18 970.61 7614.00 00:14:29.012 00:14:29.012 [2024-11-19 13:06:32.121222] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:29.012 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:29.012 [2024-11-19 13:06:32.359923] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:34.274 Initializing NVMe Controllers 00:14:34.274 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:34.274 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:34.274 Initialization complete. Launching workers. 00:14:34.274 ======================================================== 00:14:34.274 Latency(us) 00:14:34.274 Device Information : IOPS MiB/s Average min max 00:14:34.274 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39920.56 155.94 3207.34 985.58 10509.22 00:14:34.274 ======================================================== 00:14:34.274 Total : 39920.56 155.94 3207.34 985.58 10509.22 00:14:34.274 00:14:34.274 [2024-11-19 13:06:37.381061] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:34.274 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:34.274 [2024-11-19 13:06:37.584463] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:39.539 [2024-11-19 13:06:42.730046] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:39.539 Initializing NVMe Controllers 00:14:39.539 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:39.539 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:39.539 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:39.539 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:39.539 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:39.539 Initialization complete. Launching workers. 
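The MiB/s column in the two perf tables above is pure arithmetic on the IOPS column at the 4096-byte I/O size; checking the read pass as a worked example:

awk 'BEGIN { printf "%.2f MiB/s\n", 39945.52 * 4096 / 1048576 }'   # 156.04, matching the table (write pass: 39920.56 -> 155.94)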
00:14:39.539 Starting thread on core 2 00:14:39.539 Starting thread on core 3 00:14:39.539 Starting thread on core 1 00:14:39.540 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:39.799 [2024-11-19 13:06:43.027375] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:43.082 [2024-11-19 13:06:46.095180] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:43.082 Initializing NVMe Controllers 00:14:43.082 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:43.082 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:43.082 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:43.082 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:43.082 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:43.082 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:43.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:43.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:43.082 Initialization complete. Launching workers. 00:14:43.082 Starting thread on core 1 with urgent priority queue 00:14:43.082 Starting thread on core 2 with urgent priority queue 00:14:43.082 Starting thread on core 3 with urgent priority queue 00:14:43.082 Starting thread on core 0 with urgent priority queue 00:14:43.082 SPDK bdev Controller (SPDK2 ) core 0: 9251.67 IO/s 10.81 secs/100000 ios 00:14:43.082 SPDK bdev Controller (SPDK2 ) core 1: 6262.67 IO/s 15.97 secs/100000 ios 00:14:43.082 SPDK bdev Controller (SPDK2 ) core 2: 7865.67 IO/s 12.71 secs/100000 ios 00:14:43.082 SPDK bdev Controller (SPDK2 ) core 3: 7237.00 IO/s 13.82 secs/100000 ios 00:14:43.082 ======================================================== 00:14:43.082 00:14:43.082 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:43.082 [2024-11-19 13:06:46.381730] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:43.082 Initializing NVMe Controllers 00:14:43.082 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:43.082 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:43.082 Namespace ID: 1 size: 0GB 00:14:43.082 Initialization complete. 00:14:43.082 INFO: using host memory buffer for IO 00:14:43.082 Hello world! 
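The arbitration example above drives one thread per core in its 0xf mask through urgent-priority queues and reports per-core rates (the IO/s and secs/100000 ios columns), while hello_world is the minimal attach-and-do-one-I/O smoke test, using a host memory buffer as its INFO line notes. Reusing the SPDK_DIR and TRID shorthand from the previous sketch:

  # Urgent-priority queues on every core for 3 s; prints per-core IO/s as above.
  "$SPDK_DIR/build/examples/arbitration" -t 3 -r "$TRID" -d 256 -g

  # Smoke test: attach, then one write and one read through a host memory buffer.
  "$SPDK_DIR/build/examples/hello_world" -d 256 -g -r "$TRID"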
00:14:43.082 [2024-11-19 13:06:46.391806] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:43.082 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:43.340 [2024-11-19 13:06:46.673849] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:44.717 Initializing NVMe Controllers 00:14:44.717 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:44.717 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:44.717 Initialization complete. Launching workers. 00:14:44.717 submit (in ns) avg, min, max = 7675.3, 3278.3, 3999211.3 00:14:44.717 complete (in ns) avg, min, max = 18798.0, 1812.2, 3998609.6 00:14:44.717 00:14:44.717 Submit histogram 00:14:44.717 ================ 00:14:44.717 Range in us Cumulative Count 00:14:44.717 3.270 - 3.283: 0.0123% ( 2) 00:14:44.717 3.283 - 3.297: 0.0985% ( 14) 00:14:44.717 3.297 - 3.311: 0.2646% ( 27) 00:14:44.717 3.311 - 3.325: 0.5231% ( 42) 00:14:44.717 3.325 - 3.339: 1.7540% ( 200) 00:14:44.717 3.339 - 3.353: 5.3419% ( 583) 00:14:44.717 3.353 - 3.367: 10.3883% ( 820) 00:14:44.717 3.367 - 3.381: 16.5056% ( 994) 00:14:44.717 3.381 - 3.395: 22.9183% ( 1042) 00:14:44.717 3.395 - 3.409: 28.8756% ( 968) 00:14:44.717 3.409 - 3.423: 33.7375% ( 790) 00:14:44.717 3.423 - 3.437: 38.9562% ( 848) 00:14:44.717 3.437 - 3.450: 44.3227% ( 872) 00:14:44.717 3.450 - 3.464: 48.6368% ( 701) 00:14:44.717 3.464 - 3.478: 52.7909% ( 675) 00:14:44.717 3.478 - 3.492: 57.5235% ( 769) 00:14:44.717 3.492 - 3.506: 63.8562% ( 1029) 00:14:44.717 3.506 - 3.520: 69.3889% ( 899) 00:14:44.717 3.520 - 3.534: 73.9676% ( 744) 00:14:44.717 3.534 - 3.548: 78.9156% ( 804) 00:14:44.717 3.548 - 3.562: 82.8051% ( 632) 00:14:44.717 3.562 - 3.590: 86.3192% ( 571) 00:14:44.717 3.590 - 3.617: 87.3592% ( 169) 00:14:44.717 3.617 - 3.645: 88.1039% ( 121) 00:14:44.717 3.645 - 3.673: 89.6424% ( 250) 00:14:44.717 3.673 - 3.701: 91.3164% ( 272) 00:14:44.717 3.701 - 3.729: 93.2365% ( 312) 00:14:44.717 3.729 - 3.757: 94.9228% ( 274) 00:14:44.717 3.757 - 3.784: 96.6767% ( 285) 00:14:44.717 3.784 - 3.812: 97.9383% ( 205) 00:14:44.717 3.812 - 3.840: 98.6707% ( 119) 00:14:44.717 3.840 - 3.868: 99.1384% ( 76) 00:14:44.717 3.868 - 3.896: 99.4892% ( 57) 00:14:44.717 3.896 - 3.923: 99.5754% ( 14) 00:14:44.717 3.923 - 3.951: 99.5815% ( 1) 00:14:44.717 3.951 - 3.979: 99.5938% ( 2) 00:14:44.717 5.176 - 5.203: 99.6061% ( 2) 00:14:44.717 5.203 - 5.231: 99.6123% ( 1) 00:14:44.717 5.287 - 5.315: 99.6184% ( 1) 00:14:44.717 5.315 - 5.343: 99.6246% ( 1) 00:14:44.717 5.343 - 5.370: 99.6307% ( 1) 00:14:44.717 5.537 - 5.565: 99.6369% ( 1) 00:14:44.717 5.677 - 5.704: 99.6492% ( 2) 00:14:44.717 5.704 - 5.732: 99.6554% ( 1) 00:14:44.717 5.816 - 5.843: 99.6615% ( 1) 00:14:44.717 5.899 - 5.927: 99.6738% ( 2) 00:14:44.717 5.927 - 5.955: 99.6800% ( 1) 00:14:44.717 6.066 - 6.094: 99.6923% ( 2) 00:14:44.717 6.150 - 6.177: 99.7046% ( 2) 00:14:44.717 6.177 - 6.205: 99.7108% ( 1) 00:14:44.717 6.205 - 6.233: 99.7169% ( 1) 00:14:44.717 6.289 - 6.317: 99.7231% ( 1) 00:14:44.717 6.344 - 6.372: 99.7292% ( 1) 00:14:44.717 6.372 - 6.400: 99.7354% ( 1) 00:14:44.717 6.400 - 6.428: 99.7415% ( 1) 00:14:44.717 6.428 - 6.456: 99.7477% ( 1) 00:14:44.717 6.595 - 6.623: 99.7538% ( 1) 00:14:44.717 6.623 - 
6.650: 99.7600% ( 1) 00:14:44.717 6.650 - 6.678: 99.7661% ( 1) 00:14:44.717 6.734 - 6.762: 99.7723% ( 1) 00:14:44.717 6.762 - 6.790: 99.7846% ( 2) 00:14:44.717 6.790 - 6.817: 99.7908% ( 1) 00:14:44.717 6.845 - 6.873: 99.7969% ( 1) 00:14:44.717 6.873 - 6.901: 99.8031% ( 1) 00:14:44.717 6.901 - 6.929: 99.8092% ( 1) 00:14:44.717 6.957 - 6.984: 99.8154% ( 1) 00:14:44.717 7.012 - 7.040: 99.8215% ( 1) 00:14:44.717 7.179 - 7.235: 99.8277% ( 1) 00:14:44.717 7.290 - 7.346: 99.8400% ( 2) 00:14:44.717 7.457 - 7.513: 99.8523% ( 2) 00:14:44.717 7.569 - 7.624: 99.8585% ( 1) 00:14:44.717 7.903 - 7.958: 99.8708% ( 2) 00:14:44.717 8.014 - 8.070: 99.8769% ( 1) 00:14:44.717 8.237 - 8.292: 99.8831% ( 1) 00:14:44.717 8.849 - 8.904: 99.8892% ( 1) 00:14:44.717 9.071 - 9.127: 99.8954% ( 1) 00:14:44.717 3989.148 - 4017.642: 100.0000% ( 17) 00:14:44.717 00:14:44.717 Complete histogram 00:14:44.717 ================== 00:14:44.717 Range in us Cumulative Count 00:14:44.717 1.809 - [2024-11-19 13:06:47.765004] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:44.717 1.823: 0.2462% ( 40) 00:14:44.717 1.823 - 1.837: 1.5755% ( 216) 00:14:44.717 1.837 - 1.850: 2.9109% ( 217) 00:14:44.717 1.850 - 1.864: 7.1020% ( 681) 00:14:44.717 1.864 - 1.878: 55.8004% ( 7913) 00:14:44.717 1.878 - 1.892: 90.0979% ( 5573) 00:14:44.717 1.892 - 1.906: 95.4520% ( 870) 00:14:44.717 1.906 - 1.920: 96.7321% ( 208) 00:14:44.717 1.920 - 1.934: 97.2244% ( 80) 00:14:44.717 1.934 - 1.948: 97.9260% ( 114) 00:14:44.717 1.948 - 1.962: 98.8430% ( 149) 00:14:44.717 1.962 - 1.976: 99.2738% ( 70) 00:14:44.717 1.976 - 1.990: 99.3477% ( 12) 00:14:44.717 1.990 - 2.003: 99.3661% ( 3) 00:14:44.717 2.017 - 2.031: 99.3723% ( 1) 00:14:44.717 2.059 - 2.073: 99.3784% ( 1) 00:14:44.717 2.087 - 2.101: 99.3846% ( 1) 00:14:44.717 2.143 - 2.157: 99.3907% ( 1) 00:14:44.717 3.673 - 3.701: 99.3969% ( 1) 00:14:44.718 3.729 - 3.757: 99.4030% ( 1) 00:14:44.718 3.812 - 3.840: 99.4092% ( 1) 00:14:44.718 3.868 - 3.896: 99.4153% ( 1) 00:14:44.718 4.063 - 4.090: 99.4215% ( 1) 00:14:44.718 4.090 - 4.118: 99.4277% ( 1) 00:14:44.718 4.118 - 4.146: 99.4338% ( 1) 00:14:44.718 4.146 - 4.174: 99.4400% ( 1) 00:14:44.718 4.202 - 4.230: 99.4461% ( 1) 00:14:44.718 4.230 - 4.257: 99.4584% ( 2) 00:14:44.718 4.397 - 4.424: 99.4646% ( 1) 00:14:44.718 4.563 - 4.591: 99.4707% ( 1) 00:14:44.718 4.591 - 4.619: 99.4769% ( 1) 00:14:44.718 4.647 - 4.675: 99.4830% ( 1) 00:14:44.718 4.786 - 4.814: 99.4954% ( 2) 00:14:44.718 4.814 - 4.842: 99.5015% ( 1) 00:14:44.718 4.981 - 5.009: 99.5138% ( 2) 00:14:44.718 5.454 - 5.482: 99.5200% ( 1) 00:14:44.718 5.677 - 5.704: 99.5261% ( 1) 00:14:44.718 5.788 - 5.816: 99.5323% ( 1) 00:14:44.718 5.927 - 5.955: 99.5384% ( 1) 00:14:44.718 5.955 - 5.983: 99.5446% ( 1) 00:14:44.718 6.010 - 6.038: 99.5507% ( 1) 00:14:44.718 6.233 - 6.261: 99.5569% ( 1) 00:14:44.718 6.261 - 6.289: 99.5631% ( 1) 00:14:44.718 6.678 - 6.706: 99.5692% ( 1) 00:14:44.718 7.346 - 7.402: 99.5754% ( 1) 00:14:44.718 3205.565 - 3219.812: 99.5815% ( 1) 00:14:44.718 3989.148 - 4017.642: 100.0000% ( 68) 00:14:44.718 00:14:44.718 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:44.718 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:44.718 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local 
subnqn=nqn.2019-07.io.spdk:cnode2 00:14:44.718 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:44.718 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:44.718 [ 00:14:44.718 { 00:14:44.718 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:44.718 "subtype": "Discovery", 00:14:44.718 "listen_addresses": [], 00:14:44.718 "allow_any_host": true, 00:14:44.718 "hosts": [] 00:14:44.718 }, 00:14:44.718 { 00:14:44.718 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:44.718 "subtype": "NVMe", 00:14:44.718 "listen_addresses": [ 00:14:44.718 { 00:14:44.718 "trtype": "VFIOUSER", 00:14:44.718 "adrfam": "IPv4", 00:14:44.718 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:44.718 "trsvcid": "0" 00:14:44.718 } 00:14:44.718 ], 00:14:44.718 "allow_any_host": true, 00:14:44.718 "hosts": [], 00:14:44.718 "serial_number": "SPDK1", 00:14:44.718 "model_number": "SPDK bdev Controller", 00:14:44.718 "max_namespaces": 32, 00:14:44.718 "min_cntlid": 1, 00:14:44.718 "max_cntlid": 65519, 00:14:44.718 "namespaces": [ 00:14:44.718 { 00:14:44.718 "nsid": 1, 00:14:44.718 "bdev_name": "Malloc1", 00:14:44.718 "name": "Malloc1", 00:14:44.718 "nguid": "D305B95399C44E6B8293069CBD63FF5B", 00:14:44.718 "uuid": "d305b953-99c4-4e6b-8293-069cbd63ff5b" 00:14:44.718 }, 00:14:44.718 { 00:14:44.718 "nsid": 2, 00:14:44.718 "bdev_name": "Malloc3", 00:14:44.718 "name": "Malloc3", 00:14:44.718 "nguid": "1394339FA7D44E7782D5D19B6E3DE3A4", 00:14:44.718 "uuid": "1394339f-a7d4-4e77-82d5-d19b6e3de3a4" 00:14:44.718 } 00:14:44.718 ] 00:14:44.718 }, 00:14:44.718 { 00:14:44.718 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:44.718 "subtype": "NVMe", 00:14:44.718 "listen_addresses": [ 00:14:44.718 { 00:14:44.718 "trtype": "VFIOUSER", 00:14:44.718 "adrfam": "IPv4", 00:14:44.718 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:44.718 "trsvcid": "0" 00:14:44.718 } 00:14:44.718 ], 00:14:44.718 "allow_any_host": true, 00:14:44.718 "hosts": [], 00:14:44.718 "serial_number": "SPDK2", 00:14:44.718 "model_number": "SPDK bdev Controller", 00:14:44.718 "max_namespaces": 32, 00:14:44.718 "min_cntlid": 1, 00:14:44.718 "max_cntlid": 65519, 00:14:44.718 "namespaces": [ 00:14:44.718 { 00:14:44.718 "nsid": 1, 00:14:44.718 "bdev_name": "Malloc2", 00:14:44.718 "name": "Malloc2", 00:14:44.718 "nguid": "50E4F0B8090D4A5A8C65F2099DAB440D", 00:14:44.718 "uuid": "50e4f0b8-090d-4a5a-8c65-f2099dab440d" 00:14:44.718 } 00:14:44.718 ] 00:14:44.718 } 00:14:44.718 ] 00:14:44.718 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:44.718 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:44.718 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2813610 00:14:44.718 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:44.718 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:44.718 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:44.718 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:44.718 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:44.718 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:44.718 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:44.977 [2024-11-19 13:06:48.168639] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:44.977 Malloc4 00:14:44.977 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:45.235 [2024-11-19 13:06:48.417487] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:45.235 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:45.235 Asynchronous Event Request test 00:14:45.235 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:45.235 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:45.235 Registering asynchronous event callbacks... 00:14:45.235 Starting namespace attribute notice tests for all controllers... 00:14:45.235 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:45.235 aer_cb - Changed Namespace 00:14:45.235 Cleaning up... 
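The asynchronous-event test above is a three-step handshake, and the nvmf_get_subsystems dump that follows shows its effect: Malloc4 now listed as nsid 2 under cnode2. Condensed from the trace, with the same shorthand as the earlier sketches:

  RPC="$SPDK_DIR/scripts/rpc.py"

  # 1. Arm the listener in the background; per the waitforfile loop above,
  #    it creates the touch file once its AER callbacks are registered.
  "$SPDK_DIR/test/nvme/aer/aer" -r "$TRID" -n 2 -g -t /tmp/aer_touch_file &
  aerpid=$!
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
  rm -f /tmp/aer_touch_file

  # 2. Hot-add a namespace; the target raises a Changed Namespace AEN.
  $RPC bdev_malloc_create 64 512 --name Malloc4
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2

  # 3. The listener logs "aer_cb - Changed Namespace" and exits; reap it.
  wait $aerpid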
00:14:45.494 [ 00:14:45.494 { 00:14:45.494 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:45.494 "subtype": "Discovery", 00:14:45.494 "listen_addresses": [], 00:14:45.494 "allow_any_host": true, 00:14:45.494 "hosts": [] 00:14:45.494 }, 00:14:45.494 { 00:14:45.494 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:45.494 "subtype": "NVMe", 00:14:45.494 "listen_addresses": [ 00:14:45.494 { 00:14:45.494 "trtype": "VFIOUSER", 00:14:45.494 "adrfam": "IPv4", 00:14:45.494 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:45.494 "trsvcid": "0" 00:14:45.494 } 00:14:45.494 ], 00:14:45.494 "allow_any_host": true, 00:14:45.494 "hosts": [], 00:14:45.494 "serial_number": "SPDK1", 00:14:45.494 "model_number": "SPDK bdev Controller", 00:14:45.494 "max_namespaces": 32, 00:14:45.494 "min_cntlid": 1, 00:14:45.494 "max_cntlid": 65519, 00:14:45.494 "namespaces": [ 00:14:45.494 { 00:14:45.494 "nsid": 1, 00:14:45.494 "bdev_name": "Malloc1", 00:14:45.494 "name": "Malloc1", 00:14:45.494 "nguid": "D305B95399C44E6B8293069CBD63FF5B", 00:14:45.494 "uuid": "d305b953-99c4-4e6b-8293-069cbd63ff5b" 00:14:45.494 }, 00:14:45.494 { 00:14:45.494 "nsid": 2, 00:14:45.494 "bdev_name": "Malloc3", 00:14:45.494 "name": "Malloc3", 00:14:45.494 "nguid": "1394339FA7D44E7782D5D19B6E3DE3A4", 00:14:45.494 "uuid": "1394339f-a7d4-4e77-82d5-d19b6e3de3a4" 00:14:45.494 } 00:14:45.494 ] 00:14:45.494 }, 00:14:45.494 { 00:14:45.494 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:45.494 "subtype": "NVMe", 00:14:45.494 "listen_addresses": [ 00:14:45.494 { 00:14:45.494 "trtype": "VFIOUSER", 00:14:45.494 "adrfam": "IPv4", 00:14:45.494 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:45.494 "trsvcid": "0" 00:14:45.494 } 00:14:45.494 ], 00:14:45.494 "allow_any_host": true, 00:14:45.494 "hosts": [], 00:14:45.494 "serial_number": "SPDK2", 00:14:45.494 "model_number": "SPDK bdev Controller", 00:14:45.494 "max_namespaces": 32, 00:14:45.494 "min_cntlid": 1, 00:14:45.494 "max_cntlid": 65519, 00:14:45.494 "namespaces": [ 00:14:45.494 { 00:14:45.494 "nsid": 1, 00:14:45.494 "bdev_name": "Malloc2", 00:14:45.494 "name": "Malloc2", 00:14:45.494 "nguid": "50E4F0B8090D4A5A8C65F2099DAB440D", 00:14:45.494 "uuid": "50e4f0b8-090d-4a5a-8c65-f2099dab440d" 00:14:45.494 }, 00:14:45.494 { 00:14:45.494 "nsid": 2, 00:14:45.494 "bdev_name": "Malloc4", 00:14:45.494 "name": "Malloc4", 00:14:45.494 "nguid": "E900490CE0604AC498C2400B52BFA849", 00:14:45.494 "uuid": "e900490c-e060-4ac4-98c2-400b52bfa849" 00:14:45.494 } 00:14:45.494 ] 00:14:45.494 } 00:14:45.494 ] 00:14:45.494 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2813610 00:14:45.494 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:45.494 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2805592 00:14:45.494 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2805592 ']' 00:14:45.494 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2805592 00:14:45.494 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:45.494 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.494 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2805592 00:14:45.494 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:45.494 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:45.494 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2805592' 00:14:45.494 killing process with pid 2805592 00:14:45.494 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2805592 00:14:45.494 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2805592 00:14:45.751 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:45.751 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:45.751 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:45.751 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:45.751 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:45.751 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2813847 00:14:45.751 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2813847' 00:14:45.751 Process pid: 2813847 00:14:45.751 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:45.751 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2813847 00:14:45.751 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2813847 ']' 00:14:45.751 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.751 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:45.751 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.751 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.751 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.751 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:45.751 [2024-11-19 13:06:48.990249] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:45.751 [2024-11-19 13:06:48.991106] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:14:45.751 [2024-11-19 13:06:48.991146] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.751 [2024-11-19 13:06:49.066675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.751 [2024-11-19 13:06:49.109077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.751 [2024-11-19 13:06:49.109116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.751 [2024-11-19 13:06:49.109124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.751 [2024-11-19 13:06:49.109129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.751 [2024-11-19 13:06:49.109135] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.751 [2024-11-19 13:06:49.110596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.751 [2024-11-19 13:06:49.110710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.751 [2024-11-19 13:06:49.110816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.751 [2024-11-19 13:06:49.110817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:46.010 [2024-11-19 13:06:49.178686] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:46.010 [2024-11-19 13:06:49.179591] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:46.010 [2024-11-19 13:06:49.179705] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:46.010 [2024-11-19 13:06:49.180023] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:46.010 [2024-11-19 13:06:49.180071] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
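From this point the whole vfio-user flow is repeated in interrupt mode: the same nvmf_tgt is restarted with --interrupt-mode, each reactor and poll-group thread switches to event-driven operation instead of busy-polling (the "to intr mode" notices above), and the transport is then created with the extra '-M -I' arguments the wrapper passes through. The restart, with flags verbatim from the trace (-i sets the shared-memory ID, -e 0xFFFF the tracepoint group mask, -m the core list):

  # Interrupt-mode target on cores 0-3.
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  sleep 1   # as the script does at nvmf_vfio_user.sh@62, after waitforlisten

  # VFIOUSER transport with the pass-through options; -M and -I are
  # VFIOUSER-specific nvmf_create_transport options forwarded by the script.
  $RPC nvmf_create_transport -t VFIOUSER -M -I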
00:14:46.010 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.010 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:46.010 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:46.946 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:47.205 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:47.205 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:47.205 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:47.205 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:47.205 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:47.463 Malloc1 00:14:47.463 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:47.722 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:47.722 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:47.981 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:47.981 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:47.981 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:48.240 Malloc2 00:14:48.240 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:48.498 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:48.498 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:48.756 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:48.756 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2813847 00:14:48.756 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 2813847 ']' 00:14:48.756 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2813847 00:14:48.756 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:48.756 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.756 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2813847 00:14:48.756 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.756 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.756 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2813847' 00:14:48.756 killing process with pid 2813847 00:14:48.756 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2813847 00:14:48.756 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2813847 00:14:49.014 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:49.014 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:49.014 00:14:49.014 real 0m50.909s 00:14:49.014 user 3m16.974s 00:14:49.014 sys 0m3.281s 00:14:49.014 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.014 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:49.014 ************************************ 00:14:49.014 END TEST nvmf_vfio_user 00:14:49.014 ************************************ 00:14:49.014 13:06:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:49.014 13:06:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:49.014 13:06:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.014 13:06:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:49.014 ************************************ 00:14:49.014 START TEST nvmf_vfio_user_nvme_compliance 00:14:49.014 ************************************ 00:14:49.014 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:49.274 * Looking for test storage... 
00:14:49.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:49.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.274 --rc genhtml_branch_coverage=1 00:14:49.274 --rc genhtml_function_coverage=1 00:14:49.274 --rc genhtml_legend=1 00:14:49.274 --rc geninfo_all_blocks=1 00:14:49.274 --rc geninfo_unexecuted_blocks=1 00:14:49.274 00:14:49.274 ' 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:49.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.274 --rc genhtml_branch_coverage=1 00:14:49.274 --rc genhtml_function_coverage=1 00:14:49.274 --rc genhtml_legend=1 00:14:49.274 --rc geninfo_all_blocks=1 00:14:49.274 --rc geninfo_unexecuted_blocks=1 00:14:49.274 00:14:49.274 ' 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:49.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.274 --rc genhtml_branch_coverage=1 00:14:49.274 --rc genhtml_function_coverage=1 00:14:49.274 --rc genhtml_legend=1 00:14:49.274 --rc geninfo_all_blocks=1 00:14:49.274 --rc geninfo_unexecuted_blocks=1 00:14:49.274 00:14:49.274 ' 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:49.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.274 --rc genhtml_branch_coverage=1 00:14:49.274 --rc genhtml_function_coverage=1 00:14:49.274 --rc genhtml_legend=1 00:14:49.274 --rc geninfo_all_blocks=1 00:14:49.274 --rc 
geninfo_unexecuted_blocks=1 00:14:49.274 00:14:49.274 ' 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.274 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:49.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2814450 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2814450' 00:14:49.275 Process pid: 2814450 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2814450 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2814450 ']' 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.275 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:49.275 [2024-11-19 13:06:52.627243] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:14:49.275 [2024-11-19 13:06:52.627295] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.533 [2024-11-19 13:06:52.702496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:49.533 [2024-11-19 13:06:52.742914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.533 [2024-11-19 13:06:52.742958] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.533 [2024-11-19 13:06:52.742965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.533 [2024-11-19 13:06:52.742972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.533 [2024-11-19 13:06:52.742978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.533 [2024-11-19 13:06:52.744405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.533 [2024-11-19 13:06:52.744517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.533 [2024-11-19 13:06:52.744519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.533 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.533 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:49.533 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:50.466 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:50.466 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:50.466 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:50.466 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.466 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:50.725 malloc0 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:50.725 13:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.725 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:50.725 00:14:50.725 00:14:50.725 CUnit - A unit testing framework for C - Version 2.1-3 00:14:50.725 http://cunit.sourceforge.net/ 00:14:50.725 00:14:50.725 00:14:50.725 Suite: nvme_compliance 00:14:50.725 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-19 13:06:54.071372] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:50.725 [2024-11-19 13:06:54.072707] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:50.725 [2024-11-19 13:06:54.072724] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:50.725 [2024-11-19 13:06:54.072731] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:50.725 [2024-11-19 13:06:54.076406] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:50.983 passed 00:14:50.983 Test: admin_identify_ctrlr_verify_fused ...[2024-11-19 13:06:54.153967] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:50.983 [2024-11-19 13:06:54.156984] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:50.983 passed 00:14:50.983 Test: admin_identify_ns ...[2024-11-19 13:06:54.236426] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:50.983 [2024-11-19 13:06:54.299958] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:50.983 [2024-11-19 13:06:54.307966] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:50.983 [2024-11-19 13:06:54.329054] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:14:50.983 passed 00:14:51.241 Test: admin_get_features_mandatory_features ...[2024-11-19 13:06:54.403293] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:51.241 [2024-11-19 13:06:54.406313] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:51.241 passed 00:14:51.241 Test: admin_get_features_optional_features ...[2024-11-19 13:06:54.486823] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:51.241 [2024-11-19 13:06:54.489842] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:51.241 passed 00:14:51.241 Test: admin_set_features_number_of_queues ...[2024-11-19 13:06:54.568381] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:51.499 [2024-11-19 13:06:54.673041] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:51.499 passed 00:14:51.499 Test: admin_get_log_page_mandatory_logs ...[2024-11-19 13:06:54.748181] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:51.499 [2024-11-19 13:06:54.751199] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:51.499 passed 00:14:51.499 Test: admin_get_log_page_with_lpo ...[2024-11-19 13:06:54.829975] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:51.805 [2024-11-19 13:06:54.899956] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:51.805 [2024-11-19 13:06:54.913011] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:51.805 passed 00:14:51.805 Test: fabric_property_get ...[2024-11-19 13:06:54.989040] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:51.805 [2024-11-19 13:06:54.990277] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:51.805 [2024-11-19 13:06:54.992061] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:51.805 passed 00:14:51.805 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-19 13:06:55.069586] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:51.805 [2024-11-19 13:06:55.070826] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:51.805 [2024-11-19 13:06:55.072605] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:51.805 passed 00:14:52.113 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-19 13:06:55.150761] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.113 [2024-11-19 13:06:55.232958] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:52.113 [2024-11-19 13:06:55.248956] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:52.113 [2024-11-19 13:06:55.254046] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.113 passed 00:14:52.113 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-19 13:06:55.330244] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.113 [2024-11-19 13:06:55.331479] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:52.113 [2024-11-19 13:06:55.333269] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.113 passed 00:14:52.113 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-19 13:06:55.411204] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.430 [2024-11-19 13:06:55.487957] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:52.430 [2024-11-19 13:06:55.511959] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:52.430 [2024-11-19 13:06:55.517039] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.430 passed 00:14:52.430 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-19 13:06:55.592210] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.430 [2024-11-19 13:06:55.593450] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:52.430 [2024-11-19 13:06:55.593474] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:52.430 [2024-11-19 13:06:55.595236] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.430 passed 00:14:52.430 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-19 13:06:55.673144] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.430 [2024-11-19 13:06:55.764958] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:52.430 [2024-11-19 13:06:55.772959] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:52.430 [2024-11-19 13:06:55.780957] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:52.430 [2024-11-19 13:06:55.788953] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:52.705 [2024-11-19 13:06:55.818038] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.705 passed 00:14:52.705 Test: admin_create_io_sq_verify_pc ...[2024-11-19 13:06:55.893351] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:52.705 [2024-11-19 13:06:55.908961] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:52.705 [2024-11-19 13:06:55.926525] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:52.705 passed 00:14:52.705 Test: admin_create_io_qp_max_qps ...[2024-11-19 13:06:56.007090] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:54.080 [2024-11-19 13:06:57.119957] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:54.338 [2024-11-19 13:06:57.504808] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:54.338 passed 00:14:54.338 Test: admin_create_io_sq_shared_cq ...[2024-11-19 13:06:57.582380] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:54.596 [2024-11-19 13:06:57.716960] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:54.596 [2024-11-19 13:06:57.754007] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:54.596 passed 00:14:54.596 00:14:54.596 Run Summary: Type Total Ran Passed Failed Inactive 00:14:54.596 suites 1 1 n/a 0 0 00:14:54.596 tests 18 18 18 0 0 00:14:54.596 asserts 
360 360 360 0 n/a 00:14:54.596 00:14:54.596 Elapsed time = 1.518 seconds 00:14:54.596 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2814450 00:14:54.596 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2814450 ']' 00:14:54.596 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2814450 00:14:54.596 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:54.596 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.596 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2814450 00:14:54.596 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:54.596 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:54.596 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2814450' 00:14:54.596 killing process with pid 2814450 00:14:54.596 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2814450 00:14:54.596 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2814450 00:14:54.855 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:54.855 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:54.855 00:14:54.855 real 0m5.665s 00:14:54.855 user 0m15.811s 00:14:54.855 sys 0m0.521s 00:14:54.855 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.855 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:54.855 ************************************ 00:14:54.855 END TEST nvmf_vfio_user_nvme_compliance 00:14:54.855 ************************************ 00:14:54.855 13:06:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:54.855 13:06:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:54.855 13:06:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.855 13:06:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:54.855 ************************************ 00:14:54.855 START TEST nvmf_vfio_user_fuzz 00:14:54.855 ************************************ 00:14:54.855 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:54.855 * Looking for test storage... 
00:14:54.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:54.855 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:54.855 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:54.855 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:55.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.115 --rc genhtml_branch_coverage=1 00:14:55.115 --rc genhtml_function_coverage=1 00:14:55.115 --rc genhtml_legend=1 00:14:55.115 --rc geninfo_all_blocks=1 00:14:55.115 --rc geninfo_unexecuted_blocks=1 00:14:55.115 00:14:55.115 ' 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:55.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.115 --rc genhtml_branch_coverage=1 00:14:55.115 --rc genhtml_function_coverage=1 00:14:55.115 --rc genhtml_legend=1 00:14:55.115 --rc geninfo_all_blocks=1 00:14:55.115 --rc geninfo_unexecuted_blocks=1 00:14:55.115 00:14:55.115 ' 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:55.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.115 --rc genhtml_branch_coverage=1 00:14:55.115 --rc genhtml_function_coverage=1 00:14:55.115 --rc genhtml_legend=1 00:14:55.115 --rc geninfo_all_blocks=1 00:14:55.115 --rc geninfo_unexecuted_blocks=1 00:14:55.115 00:14:55.115 ' 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:55.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.115 --rc genhtml_branch_coverage=1 00:14:55.115 --rc genhtml_function_coverage=1 00:14:55.115 --rc genhtml_legend=1 00:14:55.115 --rc geninfo_all_blocks=1 00:14:55.115 --rc geninfo_unexecuted_blocks=1 00:14:55.115 00:14:55.115 ' 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:55.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:55.115 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2815545 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2815545' 00:14:55.116 Process pid: 2815545 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2815545 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2815545 ']' 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
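The trace above launches a dedicated nvmf_tgt instance for the fuzz run (pid 2815545) and then blocks in waitforlisten until the app's RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming the stock scripts/rpc.py client (the exact polling loop inside autotest_common.sh may differ):

    # start the fuzz target and poll its RPC socket until it is ready
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # rpc_get_methods succeeds once the app is listening on /var/tmp/spdk.sock
    until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
            sleep 0.5
    done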
00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.116 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:55.375 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.375 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:55.375 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:56.308 malloc0 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
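Condensed from the rpc_cmd calls traced above, the vfio-user fuzz target is assembled from one transport, one 64 MB malloc bdev, and one subsystem carrying that namespace plus a VFIOUSER listener; a sketch using the same names and paths as this run:

    mkdir -p /var/run/vfio-user
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0        # 64 MB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
            -t VFIOUSER -a /var/run/vfio-user -s 0

The resulting trid string ('trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user') is what the nvme_fuzz invocation below consumes via -F.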
00:14:56.308 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:28.381 Fuzzing completed. Shutting down the fuzz application 00:15:28.381 00:15:28.381 Dumping successful admin opcodes: 00:15:28.381 8, 9, 10, 24, 00:15:28.381 Dumping successful io opcodes: 00:15:28.381 0, 00:15:28.381 NS: 0x20000081ef00 I/O qp, Total commands completed: 1015138, total successful commands: 3980, random_seed: 1496991424 00:15:28.381 NS: 0x20000081ef00 admin qp, Total commands completed: 251768, total successful commands: 2034, random_seed: 2923808448 00:15:28.381 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:28.381 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.381 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:28.381 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.381 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2815545 00:15:28.381 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2815545 ']' 00:15:28.381 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2815545 00:15:28.381 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:28.381 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.381 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2815545 00:15:28.381 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:28.381 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:28.381 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2815545' 00:15:28.381 killing process with pid 2815545 00:15:28.381 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2815545 00:15:28.381 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2815545 00:15:28.381 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:28.382 00:15:28.382 real 0m32.223s 00:15:28.382 user 0m29.961s 00:15:28.382 sys 0m31.647s 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:28.382 
************************************ 00:15:28.382 END TEST nvmf_vfio_user_fuzz 00:15:28.382 ************************************ 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:28.382 ************************************ 00:15:28.382 START TEST nvmf_auth_target 00:15:28.382 ************************************ 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:28.382 * Looking for test storage... 00:15:28.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:28.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.382 --rc genhtml_branch_coverage=1 00:15:28.382 --rc genhtml_function_coverage=1 00:15:28.382 --rc genhtml_legend=1 00:15:28.382 --rc geninfo_all_blocks=1 00:15:28.382 --rc geninfo_unexecuted_blocks=1 00:15:28.382 00:15:28.382 ' 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:28.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.382 --rc genhtml_branch_coverage=1 00:15:28.382 --rc genhtml_function_coverage=1 00:15:28.382 --rc genhtml_legend=1 00:15:28.382 --rc geninfo_all_blocks=1 00:15:28.382 --rc geninfo_unexecuted_blocks=1 00:15:28.382 00:15:28.382 ' 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:28.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.382 --rc genhtml_branch_coverage=1 00:15:28.382 --rc genhtml_function_coverage=1 00:15:28.382 --rc genhtml_legend=1 00:15:28.382 --rc geninfo_all_blocks=1 00:15:28.382 --rc geninfo_unexecuted_blocks=1 00:15:28.382 00:15:28.382 ' 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:28.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.382 --rc genhtml_branch_coverage=1 00:15:28.382 --rc genhtml_function_coverage=1 00:15:28.382 --rc genhtml_legend=1 00:15:28.382 --rc geninfo_all_blocks=1 00:15:28.382 --rc geninfo_unexecuted_blocks=1 00:15:28.382 00:15:28.382 ' 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.382 13:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.382 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:28.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:28.383 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:33.660 
13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:33.660 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:33.661 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:33.661 13:07:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:33.661 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:33.661 Found net devices under 0000:86:00.0: cvl_0_0 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:33.661 Found net devices under 0000:86:00.1: cvl_0_1 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:33.661 13:07:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:33.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:15:33.661 00:15:33.661 --- 10.0.0.2 ping statistics --- 00:15:33.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.661 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:33.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:33.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:15:33.661 00:15:33.661 --- 10.0.0.1 ping statistics --- 00:15:33.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.661 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2823924 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2823924 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2823924 ']' 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
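The nvmf_tcp_init block above is the whole physical-to-test-network translation: the second e810 port stays in the root namespace as the initiator interface, while the first is moved into a private namespace as the target interface, so both ends of the TCP connection live on one machine. Stripped of the trace prefixes, the sequence reduces to the following sketch (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this run):

    # Give the target-side port its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listen port and check reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt process is then started inside that namespace, which is why NVMF_APP is wrapped in NVMF_TARGET_NS_CMD above: every target-side daemon launch runs under ip netns exec cvl_0_0_ns_spdk.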
00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.661 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2823948 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e5ebde68fc0c8d7c1f2d5dc999bd195e28fe8d2a8cb68a14 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.d2a 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e5ebde68fc0c8d7c1f2d5dc999bd195e28fe8d2a8cb68a14 0 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e5ebde68fc0c8d7c1f2d5dc999bd195e28fe8d2a8cb68a14 0 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e5ebde68fc0c8d7c1f2d5dc999bd195e28fe8d2a8cb68a14 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
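gen_dhchap_key's trace ends at an un-echoed `python -` heredoc, so only its inputs (the raw hex key and a digest id) and the output file are visible. From those, and from the DHHC-1 secrets printed at the nvme connect steps later in this log, the formatting step appears to be base64 over the ASCII hex key plus a CRC-32 suffix, with the digest id (null=0, sha256=1, sha384=2, sha512=3, per the digests table above) in the second field. A minimal sketch of an equivalent one-liner; the CRC-32 suffix and its little-endian packing are an assumption inferred from the printed secrets, not read from the trace:

    key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex characters for "gen_dhchap_key null 48"
    # Assumed layout: DHHC-1:<digest-id>:base64(key || crc32(key)):  (digest-id 00 = null)
    python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:00:" + base64.b64encode(k + struct.pack("<I", zlib.crc32(k) & 0xffffffff)).decode() + ":")' "$key"

The sha256/sha384/sha512 variants generated below differ only in the digest id and in how many random bytes are read (len/2), and each key file is chmod 0600 as the trace shows.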
00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.d2a 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.d2a 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.d2a 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1f94d92972f560532bf6fdd3645d87becd900439ea561a668b7249c4249df804 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Hl9 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1f94d92972f560532bf6fdd3645d87becd900439ea561a668b7249c4249df804 3 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1f94d92972f560532bf6fdd3645d87becd900439ea561a668b7249c4249df804 3 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1f94d92972f560532bf6fdd3645d87becd900439ea561a668b7249c4249df804 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Hl9 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Hl9 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Hl9 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=13aaf66ddddeac51bcad4ae38c9afad3 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.uKF 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 13aaf66ddddeac51bcad4ae38c9afad3 1 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 13aaf66ddddeac51bcad4ae38c9afad3 1 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=13aaf66ddddeac51bcad4ae38c9afad3 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:33.662 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.uKF 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.uKF 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.uKF 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=35d6e6177c087d2045edc7e9aaf68b733d6a6b6bf4eddba1 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.XTF 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 35d6e6177c087d2045edc7e9aaf68b733d6a6b6bf4eddba1 2 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 35d6e6177c087d2045edc7e9aaf68b733d6a6b6bf4eddba1 2 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:33.922 13:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=35d6e6177c087d2045edc7e9aaf68b733d6a6b6bf4eddba1 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.XTF 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.XTF 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.XTF 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.922 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8948f6528e3d0dd923b4e3b540c6d9d7ed79e8d0b4d7919e 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.sfc 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8948f6528e3d0dd923b4e3b540c6d9d7ed79e8d0b4d7919e 2 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8948f6528e3d0dd923b4e3b540c6d9d7ed79e8d0b4d7919e 2 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8948f6528e3d0dd923b4e3b540c6d9d7ed79e8d0b4d7919e 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.sfc 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.sfc 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.sfc 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
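Taken together, the generation steps above and immediately below build a four-entry key matrix: keys[i] is the host key for iteration i and ckeys[i] the controller (bidirectional) key. Entry 3 is left without a controller key, so that iteration exercises one-way authentication only. Collected from this run (the mktemp suffixes are random per run):

    keys[0]=/tmp/spdk.key-null.d2a      ckeys[0]=/tmp/spdk.key-sha512.Hl9
    keys[1]=/tmp/spdk.key-sha256.uKF    ckeys[1]=/tmp/spdk.key-sha384.XTF
    keys[2]=/tmp/spdk.key-sha384.sfc    ckeys[2]=/tmp/spdk.key-sha256.8E9
    keys[3]=/tmp/spdk.key-sha512.m7G    ckeys[3]=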
00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fdde24eadf549e27e4fa8626395e236a 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.8E9 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fdde24eadf549e27e4fa8626395e236a 1 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fdde24eadf549e27e4fa8626395e236a 1 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fdde24eadf549e27e4fa8626395e236a 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.8E9 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.8E9 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.8E9 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=df21594a930a7736b6998c94dadabba7d789fa543797961b134fdde4d8863352 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.m7G 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key df21594a930a7736b6998c94dadabba7d789fa543797961b134fdde4d8863352 3 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 df21594a930a7736b6998c94dadabba7d789fa543797961b134fdde4d8863352 3 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=df21594a930a7736b6998c94dadabba7d789fa543797961b134fdde4d8863352 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.m7G 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.m7G 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.m7G 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2823924 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2823924 ']' 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.923 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.182 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.182 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:34.182 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2823948 /var/tmp/host.sock 00:15:34.182 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2823948 ']' 00:15:34.182 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:34.182 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.182 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:34.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
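With both daemons up -- nvmf_tgt on /var/tmp/spdk.sock inside the namespace, and spdk_tgt acting as the host on /var/tmp/host.sock -- the loop that follows registers every key file on both sides under stable keyring names. Condensed from the rpc_cmd/hostrpc traces below (the target-side call relies on rpc.py's default socket path; this is a sketch of the loop's shape, not a verbatim extract):

    for i in "${!keys[@]}"; do
        scripts/rpc.py keyring_file_add_key "key$i" "${keys[$i]}"                          # target side
        scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"    # host side
        if [[ -n ${ckeys[$i]} ]]; then
            scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"
            scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done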
00:15:34.182 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.182 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.440 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.440 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:34.440 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:34.440 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.440 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.440 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.440 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:34.440 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.d2a 00:15:34.440 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.440 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.440 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.440 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.d2a 00:15:34.440 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.d2a 00:15:34.697 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Hl9 ]] 00:15:34.697 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Hl9 00:15:34.697 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.697 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.697 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.697 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Hl9 00:15:34.697 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Hl9 00:15:34.955 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:34.955 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.uKF 00:15:34.955 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.955 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.955 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.955 13:07:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.uKF 00:15:34.955 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.uKF 00:15:35.215 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.XTF ]] 00:15:35.215 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XTF 00:15:35.215 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.215 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.215 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.215 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XTF 00:15:35.215 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XTF 00:15:35.215 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:35.215 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.sfc 00:15:35.215 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.215 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.215 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.215 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.sfc 00:15:35.215 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.sfc 00:15:35.473 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.8E9 ]] 00:15:35.473 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8E9 00:15:35.473 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.473 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.474 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.474 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8E9 00:15:35.474 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8E9 00:15:35.732 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:35.732 13:07:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.m7G 00:15:35.732 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.732 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.732 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.732 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.m7G 00:15:35.732 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.m7G 00:15:35.998 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:35.998 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:35.998 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:35.998 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.998 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:35.998 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:35.998 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:35.998 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.998 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.998 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:35.999 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:35.999 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.999 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.999 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.999 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.999 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.999 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.999 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.999 
13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.262 00:15:36.262 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.262 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.262 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.521 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.521 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.521 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.521 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.521 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.521 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.521 { 00:15:36.521 "cntlid": 1, 00:15:36.521 "qid": 0, 00:15:36.521 "state": "enabled", 00:15:36.521 "thread": "nvmf_tgt_poll_group_000", 00:15:36.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:36.521 "listen_address": { 00:15:36.521 "trtype": "TCP", 00:15:36.521 "adrfam": "IPv4", 00:15:36.521 "traddr": "10.0.0.2", 00:15:36.521 "trsvcid": "4420" 00:15:36.521 }, 00:15:36.521 "peer_address": { 00:15:36.521 "trtype": "TCP", 00:15:36.521 "adrfam": "IPv4", 00:15:36.521 "traddr": "10.0.0.1", 00:15:36.521 "trsvcid": "51096" 00:15:36.521 }, 00:15:36.521 "auth": { 00:15:36.521 "state": "completed", 00:15:36.521 "digest": "sha256", 00:15:36.521 "dhgroup": "null" 00:15:36.521 } 00:15:36.521 } 00:15:36.521 ]' 00:15:36.521 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.521 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:36.521 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.780 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:36.780 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.780 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.780 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.780 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.039 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:15:37.039 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.606 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.607 13:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.607 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.876 00:15:37.876 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.876 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.876 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.133 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.133 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.133 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.133 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.133 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.133 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.133 { 00:15:38.133 "cntlid": 3, 00:15:38.133 "qid": 0, 00:15:38.133 "state": "enabled", 00:15:38.133 "thread": "nvmf_tgt_poll_group_000", 00:15:38.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:38.133 "listen_address": { 00:15:38.133 "trtype": "TCP", 00:15:38.133 "adrfam": "IPv4", 00:15:38.133 "traddr": "10.0.0.2", 00:15:38.133 "trsvcid": "4420" 00:15:38.133 }, 00:15:38.133 "peer_address": { 00:15:38.133 "trtype": "TCP", 00:15:38.133 "adrfam": "IPv4", 00:15:38.133 "traddr": "10.0.0.1", 00:15:38.133 "trsvcid": "51122" 00:15:38.133 }, 00:15:38.133 "auth": { 00:15:38.133 "state": "completed", 00:15:38.133 "digest": "sha256", 00:15:38.133 "dhgroup": "null" 00:15:38.133 } 00:15:38.133 } 00:15:38.133 ]' 00:15:38.133 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.133 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:38.133 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.391 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:38.391 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.391 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.391 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.391 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.650 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:15:38.650 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.217 13:07:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.217 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.475 00:15:39.475 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.475 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.475 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.732 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.732 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.732 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.732 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.732 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.732 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.732 { 00:15:39.732 "cntlid": 5, 00:15:39.732 "qid": 0, 00:15:39.732 "state": "enabled", 00:15:39.732 "thread": "nvmf_tgt_poll_group_000", 00:15:39.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:39.732 "listen_address": { 00:15:39.732 "trtype": "TCP", 00:15:39.732 "adrfam": "IPv4", 00:15:39.732 "traddr": "10.0.0.2", 00:15:39.732 "trsvcid": "4420" 00:15:39.732 }, 00:15:39.732 "peer_address": { 00:15:39.732 "trtype": "TCP", 00:15:39.732 "adrfam": "IPv4", 00:15:39.732 "traddr": "10.0.0.1", 00:15:39.732 "trsvcid": "51158" 00:15:39.732 }, 00:15:39.732 "auth": { 00:15:39.732 "state": "completed", 00:15:39.732 "digest": "sha256", 00:15:39.732 "dhgroup": "null" 00:15:39.732 } 00:15:39.732 } 00:15:39.732 ]' 00:15:39.732 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.732 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.732 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.990 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:39.990 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.990 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.990 13:07:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.990 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.990 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:15:39.990 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:15:40.555 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.814 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.814 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.814 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.814 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.814 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.814 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:40.814 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:40.814 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:40.814 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.814 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.814 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:40.814 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:40.814 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.814 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:40.814 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.814 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:40.814 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.814 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:40.814 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.814 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.073 00:15:41.073 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.073 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.073 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.332 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.332 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.332 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.332 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.332 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.332 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.332 { 00:15:41.332 "cntlid": 7, 00:15:41.332 "qid": 0, 00:15:41.332 "state": "enabled", 00:15:41.332 "thread": "nvmf_tgt_poll_group_000", 00:15:41.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:41.332 "listen_address": { 00:15:41.332 "trtype": "TCP", 00:15:41.332 "adrfam": "IPv4", 00:15:41.332 "traddr": "10.0.0.2", 00:15:41.332 "trsvcid": "4420" 00:15:41.332 }, 00:15:41.332 "peer_address": { 00:15:41.332 "trtype": "TCP", 00:15:41.332 "adrfam": "IPv4", 00:15:41.332 "traddr": "10.0.0.1", 00:15:41.332 "trsvcid": "33610" 00:15:41.332 }, 00:15:41.332 "auth": { 00:15:41.332 "state": "completed", 00:15:41.332 "digest": "sha256", 00:15:41.332 "dhgroup": "null" 00:15:41.332 } 00:15:41.332 } 00:15:41.332 ]' 00:15:41.332 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.332 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.332 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.591 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:41.591 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.591 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.591 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.591 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.591 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:15:41.591 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:15:42.160 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.160 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:42.160 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.160 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.160 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.419 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.678 00:15:42.678 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.678 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.678 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.938 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.938 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.938 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.938 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.938 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.938 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.938 { 00:15:42.938 "cntlid": 9, 00:15:42.938 "qid": 0, 00:15:42.938 "state": "enabled", 00:15:42.938 "thread": "nvmf_tgt_poll_group_000", 00:15:42.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:42.938 "listen_address": { 00:15:42.938 "trtype": "TCP", 00:15:42.938 "adrfam": "IPv4", 00:15:42.938 "traddr": "10.0.0.2", 00:15:42.938 "trsvcid": "4420" 00:15:42.938 }, 00:15:42.938 "peer_address": { 00:15:42.938 "trtype": "TCP", 00:15:42.938 "adrfam": "IPv4", 00:15:42.938 "traddr": "10.0.0.1", 00:15:42.938 "trsvcid": "33628" 00:15:42.938 }, 00:15:42.938 "auth": { 00:15:42.938 "state": "completed", 00:15:42.938 "digest": "sha256", 00:15:42.938 "dhgroup": "ffdhe2048" 00:15:42.938 } 00:15:42.938 } 00:15:42.938 ]' 00:15:42.938 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.938 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.938 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.938 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:15:42.938 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.197 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.197 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.197 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.197 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:15:43.197 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:15:43.765 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.765 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.765 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.765 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.765 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.765 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.765 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:43.765 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:44.024 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:44.024 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.024 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:44.024 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:44.024 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:44.024 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.024 13:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.024 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.024 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.024 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.024 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.024 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.024 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.282 00:15:44.282 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.282 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.282 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.541 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.541 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.541 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.541 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.541 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.541 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.541 { 00:15:44.541 "cntlid": 11, 00:15:44.541 "qid": 0, 00:15:44.541 "state": "enabled", 00:15:44.541 "thread": "nvmf_tgt_poll_group_000", 00:15:44.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:44.541 "listen_address": { 00:15:44.541 "trtype": "TCP", 00:15:44.541 "adrfam": "IPv4", 00:15:44.541 "traddr": "10.0.0.2", 00:15:44.541 "trsvcid": "4420" 00:15:44.541 }, 00:15:44.541 "peer_address": { 00:15:44.541 "trtype": "TCP", 00:15:44.541 "adrfam": "IPv4", 00:15:44.541 "traddr": "10.0.0.1", 00:15:44.541 "trsvcid": "33660" 00:15:44.541 }, 00:15:44.541 "auth": { 00:15:44.541 "state": "completed", 00:15:44.541 "digest": "sha256", 00:15:44.541 "dhgroup": "ffdhe2048" 00:15:44.541 } 00:15:44.541 } 00:15:44.541 ]' 00:15:44.541 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.541 13:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.541 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.541 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:44.541 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.541 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.541 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.541 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.800 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:15:44.800 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:15:45.368 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.368 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.368 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.368 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.368 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.368 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.368 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:45.368 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:45.627 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:45.627 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.627 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.627 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:45.627 13:07:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:45.627 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.627 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.627 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.627 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.627 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.627 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.627 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.627 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.887 00:15:45.887 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.887 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.887 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.145 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.145 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.145 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.145 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.145 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.145 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.145 { 00:15:46.145 "cntlid": 13, 00:15:46.145 "qid": 0, 00:15:46.145 "state": "enabled", 00:15:46.145 "thread": "nvmf_tgt_poll_group_000", 00:15:46.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:46.145 "listen_address": { 00:15:46.145 "trtype": "TCP", 00:15:46.145 "adrfam": "IPv4", 00:15:46.145 "traddr": "10.0.0.2", 00:15:46.145 "trsvcid": "4420" 00:15:46.145 }, 00:15:46.145 "peer_address": { 00:15:46.145 "trtype": "TCP", 00:15:46.145 "adrfam": "IPv4", 00:15:46.145 "traddr": "10.0.0.1", 00:15:46.145 "trsvcid": "33692" 00:15:46.145 }, 00:15:46.145 "auth": { 00:15:46.145 "state": "completed", 00:15:46.145 "digest": 
"sha256", 00:15:46.145 "dhgroup": "ffdhe2048" 00:15:46.145 } 00:15:46.145 } 00:15:46.145 ]' 00:15:46.145 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.145 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.145 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.145 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:46.145 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.145 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.145 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.145 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.403 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:15:46.403 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:15:46.970 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.970 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.970 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.970 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.970 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.970 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.970 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:46.970 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:47.229 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:47.229 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.229 13:07:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:47.229 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:47.229 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.229 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.229 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:47.229 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.229 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.229 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.229 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.229 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.229 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.488 00:15:47.488 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.488 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.488 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.748 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.748 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.748 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.748 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.748 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.748 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.748 { 00:15:47.748 "cntlid": 15, 00:15:47.748 "qid": 0, 00:15:47.748 "state": "enabled", 00:15:47.748 "thread": "nvmf_tgt_poll_group_000", 00:15:47.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:47.748 "listen_address": { 00:15:47.748 "trtype": "TCP", 00:15:47.748 "adrfam": "IPv4", 00:15:47.748 "traddr": "10.0.0.2", 00:15:47.748 "trsvcid": "4420" 00:15:47.748 }, 00:15:47.748 "peer_address": { 00:15:47.748 "trtype": "TCP", 00:15:47.748 "adrfam": "IPv4", 00:15:47.748 "traddr": "10.0.0.1", 00:15:47.748 
"trsvcid": "33718" 00:15:47.748 }, 00:15:47.748 "auth": { 00:15:47.748 "state": "completed", 00:15:47.748 "digest": "sha256", 00:15:47.748 "dhgroup": "ffdhe2048" 00:15:47.748 } 00:15:47.748 } 00:15:47.748 ]' 00:15:47.748 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.748 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.748 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.748 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:47.748 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.748 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.748 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.748 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.006 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:15:48.006 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:15:48.573 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.573 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:48.573 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.573 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.573 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.573 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.573 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.573 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:48.574 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:48.832 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:48.832 13:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.832 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.832 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:48.832 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:48.832 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.832 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.832 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.832 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.833 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.833 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.833 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.833 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.091 00:15:49.091 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.091 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.091 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.350 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.350 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.350 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.350 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.350 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.350 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.350 { 00:15:49.350 "cntlid": 17, 00:15:49.350 "qid": 0, 00:15:49.350 "state": "enabled", 00:15:49.350 "thread": "nvmf_tgt_poll_group_000", 00:15:49.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:49.350 "listen_address": { 00:15:49.350 "trtype": "TCP", 00:15:49.350 "adrfam": "IPv4", 
00:15:49.350 "traddr": "10.0.0.2", 00:15:49.350 "trsvcid": "4420" 00:15:49.350 }, 00:15:49.350 "peer_address": { 00:15:49.350 "trtype": "TCP", 00:15:49.350 "adrfam": "IPv4", 00:15:49.350 "traddr": "10.0.0.1", 00:15:49.350 "trsvcid": "33740" 00:15:49.350 }, 00:15:49.350 "auth": { 00:15:49.350 "state": "completed", 00:15:49.350 "digest": "sha256", 00:15:49.350 "dhgroup": "ffdhe3072" 00:15:49.350 } 00:15:49.350 } 00:15:49.350 ]' 00:15:49.350 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.350 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.350 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.350 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.350 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.350 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.350 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.350 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.609 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:15:49.609 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:15:50.177 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.177 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.177 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.177 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.177 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.177 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.177 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.177 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.435 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:50.435 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.435 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.435 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:50.435 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:50.436 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.436 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.436 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.436 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.436 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.436 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.436 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.436 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.694 00:15:50.694 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.694 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.694 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.953 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.953 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.953 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.953 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.953 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.953 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.953 { 
00:15:50.953 "cntlid": 19, 00:15:50.953 "qid": 0, 00:15:50.953 "state": "enabled", 00:15:50.953 "thread": "nvmf_tgt_poll_group_000", 00:15:50.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:50.953 "listen_address": { 00:15:50.953 "trtype": "TCP", 00:15:50.953 "adrfam": "IPv4", 00:15:50.953 "traddr": "10.0.0.2", 00:15:50.953 "trsvcid": "4420" 00:15:50.953 }, 00:15:50.953 "peer_address": { 00:15:50.953 "trtype": "TCP", 00:15:50.953 "adrfam": "IPv4", 00:15:50.953 "traddr": "10.0.0.1", 00:15:50.953 "trsvcid": "45382" 00:15:50.953 }, 00:15:50.953 "auth": { 00:15:50.953 "state": "completed", 00:15:50.953 "digest": "sha256", 00:15:50.953 "dhgroup": "ffdhe3072" 00:15:50.953 } 00:15:50.953 } 00:15:50.953 ]' 00:15:50.953 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.953 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.953 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.953 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:50.953 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.953 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.953 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.953 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.211 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:15:51.211 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:15:51.779 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.779 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:51.779 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.779 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.779 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.779 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.779 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.779 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:52.038 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:52.038 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.038 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:52.038 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:52.038 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:52.038 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.038 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.038 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.038 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.038 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.038 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.038 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.038 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.297 00:15:52.297 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.297 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.297 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.555 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.555 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.555 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.555 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.555 13:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.555 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.555 { 00:15:52.555 "cntlid": 21, 00:15:52.555 "qid": 0, 00:15:52.556 "state": "enabled", 00:15:52.556 "thread": "nvmf_tgt_poll_group_000", 00:15:52.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:52.556 "listen_address": { 00:15:52.556 "trtype": "TCP", 00:15:52.556 "adrfam": "IPv4", 00:15:52.556 "traddr": "10.0.0.2", 00:15:52.556 "trsvcid": "4420" 00:15:52.556 }, 00:15:52.556 "peer_address": { 00:15:52.556 "trtype": "TCP", 00:15:52.556 "adrfam": "IPv4", 00:15:52.556 "traddr": "10.0.0.1", 00:15:52.556 "trsvcid": "45400" 00:15:52.556 }, 00:15:52.556 "auth": { 00:15:52.556 "state": "completed", 00:15:52.556 "digest": "sha256", 00:15:52.556 "dhgroup": "ffdhe3072" 00:15:52.556 } 00:15:52.556 } 00:15:52.556 ]' 00:15:52.556 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.556 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.556 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.556 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.556 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.556 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.556 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.556 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.814 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:15:52.814 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:15:53.381 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.381 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.381 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.381 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.381 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:53.381 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.381 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:53.381 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:53.640 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:53.640 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.640 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:53.640 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:53.640 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:53.640 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.640 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:53.640 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.640 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.640 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.640 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:53.640 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.640 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.899 00:15:53.899 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.899 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.899 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.158 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.158 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.158 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.158 13:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.158 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.158 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.158 { 00:15:54.158 "cntlid": 23, 00:15:54.158 "qid": 0, 00:15:54.158 "state": "enabled", 00:15:54.158 "thread": "nvmf_tgt_poll_group_000", 00:15:54.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:54.158 "listen_address": { 00:15:54.158 "trtype": "TCP", 00:15:54.158 "adrfam": "IPv4", 00:15:54.158 "traddr": "10.0.0.2", 00:15:54.158 "trsvcid": "4420" 00:15:54.158 }, 00:15:54.158 "peer_address": { 00:15:54.158 "trtype": "TCP", 00:15:54.158 "adrfam": "IPv4", 00:15:54.158 "traddr": "10.0.0.1", 00:15:54.158 "trsvcid": "45426" 00:15:54.158 }, 00:15:54.158 "auth": { 00:15:54.158 "state": "completed", 00:15:54.158 "digest": "sha256", 00:15:54.158 "dhgroup": "ffdhe3072" 00:15:54.158 } 00:15:54.158 } 00:15:54.158 ]' 00:15:54.158 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.158 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.158 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.158 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:54.158 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.158 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.158 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.158 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.416 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:15:54.416 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:15:54.983 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.983 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.983 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.983 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.983 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:54.983 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:54.983 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.983 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:54.983 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.242 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:55.242 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.242 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.242 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:55.242 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:55.242 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.242 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.242 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.242 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.242 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.242 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.242 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.242 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.500 00:15:55.500 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.500 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.500 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.760 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.760 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.760 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.760 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.760 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.760 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.760 { 00:15:55.760 "cntlid": 25, 00:15:55.760 "qid": 0, 00:15:55.760 "state": "enabled", 00:15:55.760 "thread": "nvmf_tgt_poll_group_000", 00:15:55.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:55.760 "listen_address": { 00:15:55.760 "trtype": "TCP", 00:15:55.760 "adrfam": "IPv4", 00:15:55.760 "traddr": "10.0.0.2", 00:15:55.760 "trsvcid": "4420" 00:15:55.760 }, 00:15:55.760 "peer_address": { 00:15:55.760 "trtype": "TCP", 00:15:55.760 "adrfam": "IPv4", 00:15:55.760 "traddr": "10.0.0.1", 00:15:55.760 "trsvcid": "45452" 00:15:55.760 }, 00:15:55.760 "auth": { 00:15:55.760 "state": "completed", 00:15:55.760 "digest": "sha256", 00:15:55.760 "dhgroup": "ffdhe4096" 00:15:55.760 } 00:15:55.760 } 00:15:55.760 ]' 00:15:55.760 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.760 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.760 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.760 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.760 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.760 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.760 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.760 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.019 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:15:56.019 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:15:56.586 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.586 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.586 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.586 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.586 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.586 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.586 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.586 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.845 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:56.846 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.846 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:56.846 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:56.846 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:56.846 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.846 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.846 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.846 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.846 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.846 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.846 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.846 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.105 00:15:57.105 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.105 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.105 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.364 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.364 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.364 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.364 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.364 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.364 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.364 { 00:15:57.364 "cntlid": 27, 00:15:57.364 "qid": 0, 00:15:57.364 "state": "enabled", 00:15:57.364 "thread": "nvmf_tgt_poll_group_000", 00:15:57.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:57.364 "listen_address": { 00:15:57.364 "trtype": "TCP", 00:15:57.364 "adrfam": "IPv4", 00:15:57.364 "traddr": "10.0.0.2", 00:15:57.364 "trsvcid": "4420" 00:15:57.364 }, 00:15:57.364 "peer_address": { 00:15:57.364 "trtype": "TCP", 00:15:57.364 "adrfam": "IPv4", 00:15:57.364 "traddr": "10.0.0.1", 00:15:57.364 "trsvcid": "45486" 00:15:57.364 }, 00:15:57.364 "auth": { 00:15:57.364 "state": "completed", 00:15:57.364 "digest": "sha256", 00:15:57.364 "dhgroup": "ffdhe4096" 00:15:57.364 } 00:15:57.364 } 00:15:57.364 ]' 00:15:57.364 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.364 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.364 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.364 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:57.364 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.622 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.622 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.622 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.622 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:15:57.622 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:15:58.190 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:58.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.190 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.190 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.190 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.190 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.190 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.190 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:58.190 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:58.448 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:58.448 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.448 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:58.448 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:58.448 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:58.448 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.448 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.448 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.448 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.448 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.448 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.448 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.448 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.709 00:15:58.709 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
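Each attach is then verified on the target side: nvmf_subsystem_get_qpairs returns the subsystem's qpairs as JSON, and the auth object inside each entry must report the negotiated digest and DH group with state "completed". A minimal sketch of that check, assuming the same cnode0 subsystem as in this log (the qpairs shell variable is illustrative; rpc_cmd and the jq filters are taken from the trace):

    # target application: dump qpair state for the subsystem under test
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # pass only if DH-CHAP actually completed with the expected parameters,
    # not merely if the connect call returned success
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]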
00:15:58.709 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.709 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.998 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.998 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.998 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.998 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.998 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.998 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.998 { 00:15:58.998 "cntlid": 29, 00:15:58.998 "qid": 0, 00:15:58.998 "state": "enabled", 00:15:58.998 "thread": "nvmf_tgt_poll_group_000", 00:15:58.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:58.998 "listen_address": { 00:15:58.998 "trtype": "TCP", 00:15:58.998 "adrfam": "IPv4", 00:15:58.998 "traddr": "10.0.0.2", 00:15:58.998 "trsvcid": "4420" 00:15:58.998 }, 00:15:58.998 "peer_address": { 00:15:58.998 "trtype": "TCP", 00:15:58.998 "adrfam": "IPv4", 00:15:58.998 "traddr": "10.0.0.1", 00:15:58.998 "trsvcid": "45520" 00:15:58.998 }, 00:15:58.998 "auth": { 00:15:58.998 "state": "completed", 00:15:58.998 "digest": "sha256", 00:15:58.998 "dhgroup": "ffdhe4096" 00:15:58.998 } 00:15:58.998 } 00:15:58.998 ]' 00:15:58.998 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.998 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.998 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.998 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:58.998 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.998 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.998 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.998 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.284 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:15:59.284 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: 
--dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:15:59.864 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.864 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.864 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.865 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.865 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.865 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.865 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:59.865 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:00.123 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:00.123 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.123 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:00.123 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:00.123 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:00.123 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.123 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:00.123 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.123 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.123 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.123 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:00.123 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.124 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.383 00:16:00.383 13:08:03 
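This is the key3 pass of connect_authenticate: the ckeys array has no entry at index 3, so the ckey expansion above comes out empty and both RPCs carry only --dhchap-key, giving unidirectional authentication, unlike the key0 through key2 passes where --dhchap-ctrlr-key ckeyN makes it bidirectional. The pairing, condensed from the surrounding trace (rpc_cmd targets the default socket, hostrpc targets /var/tmp/host.sock, as elsewhere in this log):

    # target: authorize the host NQN for cnode0 and bind it to key3
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-key key3
    # host: attach with the matching key name; the controller only comes up
    # if DH-CHAP succeeds with the secret behind key3
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3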
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.383 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.383 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.641 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.641 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.641 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.641 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.641 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.641 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.641 { 00:16:00.641 "cntlid": 31, 00:16:00.641 "qid": 0, 00:16:00.641 "state": "enabled", 00:16:00.641 "thread": "nvmf_tgt_poll_group_000", 00:16:00.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:00.641 "listen_address": { 00:16:00.641 "trtype": "TCP", 00:16:00.641 "adrfam": "IPv4", 00:16:00.641 "traddr": "10.0.0.2", 00:16:00.642 "trsvcid": "4420" 00:16:00.642 }, 00:16:00.642 "peer_address": { 00:16:00.642 "trtype": "TCP", 00:16:00.642 "adrfam": "IPv4", 00:16:00.642 "traddr": "10.0.0.1", 00:16:00.642 "trsvcid": "45548" 00:16:00.642 }, 00:16:00.642 "auth": { 00:16:00.642 "state": "completed", 00:16:00.642 "digest": "sha256", 00:16:00.642 "dhgroup": "ffdhe4096" 00:16:00.642 } 00:16:00.642 } 00:16:00.642 ]' 00:16:00.642 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.642 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.642 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.642 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:00.642 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.642 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.642 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.642 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.900 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:00.900 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:01.467 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.467 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.467 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.467 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.467 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.467 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:01.467 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.467 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:01.467 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:01.726 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:01.726 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.726 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:01.726 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:01.726 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:01.726 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.726 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.726 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.726 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.726 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.726 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.726 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.726 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.984 00:16:01.984 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.984 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.984 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.242 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.242 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.242 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.242 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.242 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.242 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.242 { 00:16:02.242 "cntlid": 33, 00:16:02.242 "qid": 0, 00:16:02.242 "state": "enabled", 00:16:02.242 "thread": "nvmf_tgt_poll_group_000", 00:16:02.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:02.242 "listen_address": { 00:16:02.242 "trtype": "TCP", 00:16:02.242 "adrfam": "IPv4", 00:16:02.242 "traddr": "10.0.0.2", 00:16:02.242 "trsvcid": "4420" 00:16:02.242 }, 00:16:02.242 "peer_address": { 00:16:02.242 "trtype": "TCP", 00:16:02.242 "adrfam": "IPv4", 00:16:02.242 "traddr": "10.0.0.1", 00:16:02.242 "trsvcid": "38182" 00:16:02.242 }, 00:16:02.242 "auth": { 00:16:02.242 "state": "completed", 00:16:02.242 "digest": "sha256", 00:16:02.242 "dhgroup": "ffdhe6144" 00:16:02.242 } 00:16:02.242 } 00:16:02.242 ]' 00:16:02.242 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.242 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.242 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.500 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:02.500 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.500 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.500 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.500 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.758 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret 
DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:02.759 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.327 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.896 00:16:03.896 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.896 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.896 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.896 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.896 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.896 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.896 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.896 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.896 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.896 { 00:16:03.896 "cntlid": 35, 00:16:03.896 "qid": 0, 00:16:03.896 "state": "enabled", 00:16:03.896 "thread": "nvmf_tgt_poll_group_000", 00:16:03.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:03.896 "listen_address": { 00:16:03.896 "trtype": "TCP", 00:16:03.896 "adrfam": "IPv4", 00:16:03.896 "traddr": "10.0.0.2", 00:16:03.896 "trsvcid": "4420" 00:16:03.896 }, 00:16:03.896 "peer_address": { 00:16:03.896 "trtype": "TCP", 00:16:03.896 "adrfam": "IPv4", 00:16:03.896 "traddr": "10.0.0.1", 00:16:03.896 "trsvcid": "38210" 00:16:03.896 }, 00:16:03.896 "auth": { 00:16:03.896 "state": "completed", 00:16:03.896 "digest": "sha256", 00:16:03.896 "dhgroup": "ffdhe6144" 00:16:03.896 } 00:16:03.896 } 00:16:03.896 ]' 00:16:03.896 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.896 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.896 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.155 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.155 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.155 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.155 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.155 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.414 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:04.414 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.983 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.551 00:16:05.551 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.551 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.551 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.551 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.551 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.551 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.551 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.551 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.551 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.551 { 00:16:05.551 "cntlid": 37, 00:16:05.551 "qid": 0, 00:16:05.551 "state": "enabled", 00:16:05.551 "thread": "nvmf_tgt_poll_group_000", 00:16:05.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:05.551 "listen_address": { 00:16:05.551 "trtype": "TCP", 00:16:05.551 "adrfam": "IPv4", 00:16:05.551 "traddr": "10.0.0.2", 00:16:05.551 "trsvcid": "4420" 00:16:05.551 }, 00:16:05.551 "peer_address": { 00:16:05.551 "trtype": "TCP", 00:16:05.551 "adrfam": "IPv4", 00:16:05.551 "traddr": "10.0.0.1", 00:16:05.551 "trsvcid": "38228" 00:16:05.551 }, 00:16:05.551 "auth": { 00:16:05.551 "state": "completed", 00:16:05.551 "digest": "sha256", 00:16:05.551 "dhgroup": "ffdhe6144" 00:16:05.551 } 00:16:05.551 } 00:16:05.551 ]' 00:16:05.551 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.551 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.551 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.810 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.810 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.810 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.810 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:05.810 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.068 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:06.068 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.635 13:08:09 
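Besides the SPDK host application, every pass also exercises the kernel initiator: the nvme_connect helper above hands nvme-cli the same secrets in their interchange form. Per the NVMe DH-HMAC-CHAP secret representation, a DHHC-1 string carries the secret base64-encoded after a two-digit transformation tag (00 for an untransformed secret; 01, 02, 03 for secrets transformed with SHA-256/384/512, which is what the :02:/:01: pair a few records up is). A sketch of that leg with the secret bodies elided; the real values appear verbatim in the trace:

    # kernel initiator: connect with DH-CHAP secrets, then tear back down
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:02:<base64>:' --dhchap-ctrl-secret 'DHHC-1:01:<base64>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0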
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.635 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.202 00:16:07.202 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.202 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.202 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.202 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.202 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.202 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.202 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.202 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.202 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.202 { 00:16:07.202 "cntlid": 39, 00:16:07.202 "qid": 0, 00:16:07.202 "state": "enabled", 00:16:07.202 "thread": "nvmf_tgt_poll_group_000", 00:16:07.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:07.202 "listen_address": { 00:16:07.202 "trtype": "TCP", 00:16:07.202 "adrfam": "IPv4", 00:16:07.202 "traddr": "10.0.0.2", 00:16:07.202 "trsvcid": "4420" 00:16:07.202 }, 00:16:07.202 "peer_address": { 00:16:07.202 "trtype": "TCP", 00:16:07.202 "adrfam": "IPv4", 00:16:07.202 "traddr": "10.0.0.1", 00:16:07.202 "trsvcid": "38246" 00:16:07.202 }, 00:16:07.202 "auth": { 00:16:07.202 "state": "completed", 00:16:07.202 "digest": "sha256", 00:16:07.202 "dhgroup": "ffdhe6144" 00:16:07.202 } 00:16:07.202 } 00:16:07.202 ]' 00:16:07.202 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.461 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.461 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.461 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:07.461 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.461 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:07.461 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.461 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.720 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:07.720 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:08.287 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.287 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.287 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.287 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.287 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.287 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.287 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.287 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:08.287 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:08.546 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:08.546 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.546 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.546 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:08.546 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:08.546 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.546 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.546 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:08.546 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.546 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.546 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.546 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.546 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.805 00:16:09.064 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.064 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.064 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.064 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.064 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.064 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.064 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.064 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.064 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.064 { 00:16:09.064 "cntlid": 41, 00:16:09.064 "qid": 0, 00:16:09.064 "state": "enabled", 00:16:09.064 "thread": "nvmf_tgt_poll_group_000", 00:16:09.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:09.064 "listen_address": { 00:16:09.064 "trtype": "TCP", 00:16:09.064 "adrfam": "IPv4", 00:16:09.064 "traddr": "10.0.0.2", 00:16:09.064 "trsvcid": "4420" 00:16:09.064 }, 00:16:09.064 "peer_address": { 00:16:09.064 "trtype": "TCP", 00:16:09.064 "adrfam": "IPv4", 00:16:09.064 "traddr": "10.0.0.1", 00:16:09.064 "trsvcid": "38274" 00:16:09.064 }, 00:16:09.064 "auth": { 00:16:09.064 "state": "completed", 00:16:09.064 "digest": "sha256", 00:16:09.064 "dhgroup": "ffdhe8192" 00:16:09.064 } 00:16:09.064 } 00:16:09.064 ]' 00:16:09.064 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.064 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.064 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.323 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:09.323 13:08:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.323 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.323 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.323 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.582 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:09.582 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.150 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.717 00:16:10.717 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.717 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.717 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.975 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.975 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.975 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.975 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.975 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.975 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.975 { 00:16:10.975 "cntlid": 43, 00:16:10.975 "qid": 0, 00:16:10.975 "state": "enabled", 00:16:10.975 "thread": "nvmf_tgt_poll_group_000", 00:16:10.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:10.975 "listen_address": { 00:16:10.975 "trtype": "TCP", 00:16:10.975 "adrfam": "IPv4", 00:16:10.975 "traddr": "10.0.0.2", 00:16:10.975 "trsvcid": "4420" 00:16:10.975 }, 00:16:10.975 "peer_address": { 00:16:10.975 "trtype": "TCP", 00:16:10.975 "adrfam": "IPv4", 00:16:10.975 "traddr": "10.0.0.1", 00:16:10.976 "trsvcid": "38298" 00:16:10.976 }, 00:16:10.976 "auth": { 00:16:10.976 "state": "completed", 00:16:10.976 "digest": "sha256", 00:16:10.976 "dhgroup": "ffdhe8192" 00:16:10.976 } 00:16:10.976 } 00:16:10.976 ]' 00:16:10.976 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.976 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:10.976 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.976 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:10.976 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.976 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.976 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.976 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.234 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:11.234 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:11.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:11.801 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:12.060 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:12.060 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.060 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.060 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:12.060 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:12.060 13:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.060 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.060 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.060 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.060 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.060 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.060 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.060 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.628 00:16:12.628 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.628 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.628 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.887 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.887 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.887 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.887 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.887 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.887 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.887 { 00:16:12.887 "cntlid": 45, 00:16:12.887 "qid": 0, 00:16:12.887 "state": "enabled", 00:16:12.887 "thread": "nvmf_tgt_poll_group_000", 00:16:12.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:12.887 "listen_address": { 00:16:12.887 "trtype": "TCP", 00:16:12.887 "adrfam": "IPv4", 00:16:12.887 "traddr": "10.0.0.2", 00:16:12.887 "trsvcid": "4420" 00:16:12.887 }, 00:16:12.887 "peer_address": { 00:16:12.887 "trtype": "TCP", 00:16:12.888 "adrfam": "IPv4", 00:16:12.888 "traddr": "10.0.0.1", 00:16:12.888 "trsvcid": "49400" 00:16:12.888 }, 00:16:12.888 "auth": { 00:16:12.888 "state": "completed", 00:16:12.888 "digest": "sha256", 00:16:12.888 "dhgroup": "ffdhe8192" 00:16:12.888 } 00:16:12.888 } 00:16:12.888 ]' 00:16:12.888 
13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.888 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.888 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.888 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.888 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.888 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.888 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.888 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.147 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:13.147 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:13.714 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.714 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.714 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.714 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.714 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.714 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.714 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:13.714 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:13.973 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:13.973 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.973 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:13.973 13:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:13.973 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.973 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.973 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:13.973 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.973 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.974 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.974 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.974 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.974 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.541 00:16:14.541 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.541 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.541 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.541 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.541 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.541 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.541 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.541 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.541 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.541 { 00:16:14.541 "cntlid": 47, 00:16:14.541 "qid": 0, 00:16:14.541 "state": "enabled", 00:16:14.541 "thread": "nvmf_tgt_poll_group_000", 00:16:14.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:14.541 "listen_address": { 00:16:14.541 "trtype": "TCP", 00:16:14.541 "adrfam": "IPv4", 00:16:14.541 "traddr": "10.0.0.2", 00:16:14.541 "trsvcid": "4420" 00:16:14.541 }, 00:16:14.541 "peer_address": { 00:16:14.541 "trtype": "TCP", 00:16:14.541 "adrfam": "IPv4", 00:16:14.541 "traddr": "10.0.0.1", 00:16:14.541 "trsvcid": "49434" 00:16:14.541 }, 00:16:14.541 "auth": { 00:16:14.541 "state": "completed", 00:16:14.541 
"digest": "sha256", 00:16:14.541 "dhgroup": "ffdhe8192" 00:16:14.541 } 00:16:14.541 } 00:16:14.541 ]' 00:16:14.541 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.541 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.541 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.800 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.800 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.800 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.800 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.800 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.059 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:15.059 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:15.627 13:08:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.627 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.886 00:16:15.886 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.886 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.886 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.144 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.144 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.144 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.144 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.144 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.144 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.144 { 00:16:16.144 "cntlid": 49, 00:16:16.144 "qid": 0, 00:16:16.144 "state": "enabled", 00:16:16.144 "thread": "nvmf_tgt_poll_group_000", 00:16:16.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:16.144 "listen_address": { 00:16:16.144 "trtype": "TCP", 00:16:16.144 "adrfam": "IPv4", 
00:16:16.144 "traddr": "10.0.0.2", 00:16:16.144 "trsvcid": "4420" 00:16:16.144 }, 00:16:16.144 "peer_address": { 00:16:16.144 "trtype": "TCP", 00:16:16.144 "adrfam": "IPv4", 00:16:16.144 "traddr": "10.0.0.1", 00:16:16.144 "trsvcid": "49464" 00:16:16.144 }, 00:16:16.144 "auth": { 00:16:16.144 "state": "completed", 00:16:16.144 "digest": "sha384", 00:16:16.144 "dhgroup": "null" 00:16:16.144 } 00:16:16.144 } 00:16:16.144 ]' 00:16:16.144 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.144 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.144 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.402 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:16.403 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.403 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.403 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.403 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.403 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:16.403 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.338 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.597 00:16:17.597 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.597 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.597 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.855 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.855 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.856 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.856 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.856 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.856 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.856 { 00:16:17.856 "cntlid": 51, 00:16:17.856 "qid": 0, 00:16:17.856 "state": "enabled", 
00:16:17.856 "thread": "nvmf_tgt_poll_group_000", 00:16:17.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.856 "listen_address": { 00:16:17.856 "trtype": "TCP", 00:16:17.856 "adrfam": "IPv4", 00:16:17.856 "traddr": "10.0.0.2", 00:16:17.856 "trsvcid": "4420" 00:16:17.856 }, 00:16:17.856 "peer_address": { 00:16:17.856 "trtype": "TCP", 00:16:17.856 "adrfam": "IPv4", 00:16:17.856 "traddr": "10.0.0.1", 00:16:17.856 "trsvcid": "49490" 00:16:17.856 }, 00:16:17.856 "auth": { 00:16:17.856 "state": "completed", 00:16:17.856 "digest": "sha384", 00:16:17.856 "dhgroup": "null" 00:16:17.856 } 00:16:17.856 } 00:16:17.856 ]' 00:16:17.856 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.856 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.856 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.856 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:17.856 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.856 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.856 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.856 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.115 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:18.115 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:18.682 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.683 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.683 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.683 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.683 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.683 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.683 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:18.683 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:18.942 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:18.942 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.942 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.942 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:18.942 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:18.942 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.942 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.942 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.942 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.942 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.942 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.942 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.942 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.200 00:16:19.200 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.200 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.200 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.459 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.459 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.459 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.459 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.459 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.459 13:08:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.459 { 00:16:19.459 "cntlid": 53, 00:16:19.459 "qid": 0, 00:16:19.459 "state": "enabled", 00:16:19.459 "thread": "nvmf_tgt_poll_group_000", 00:16:19.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:19.459 "listen_address": { 00:16:19.459 "trtype": "TCP", 00:16:19.459 "adrfam": "IPv4", 00:16:19.459 "traddr": "10.0.0.2", 00:16:19.459 "trsvcid": "4420" 00:16:19.459 }, 00:16:19.459 "peer_address": { 00:16:19.459 "trtype": "TCP", 00:16:19.459 "adrfam": "IPv4", 00:16:19.459 "traddr": "10.0.0.1", 00:16:19.459 "trsvcid": "49504" 00:16:19.459 }, 00:16:19.459 "auth": { 00:16:19.459 "state": "completed", 00:16:19.459 "digest": "sha384", 00:16:19.459 "dhgroup": "null" 00:16:19.459 } 00:16:19.459 } 00:16:19.459 ]' 00:16:19.459 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.459 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.459 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.459 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:19.459 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.459 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.459 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.459 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.717 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:19.717 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:20.284 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.284 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.284 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.284 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.284 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.284 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:20.284 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:20.284 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:20.542 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:20.542 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.542 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.542 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:20.542 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:20.542 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.542 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:20.542 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.542 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.542 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.542 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:20.542 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.542 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.800 00:16:20.801 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.801 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.801 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.059 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.059 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.059 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.059 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.059 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.059 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.059 { 00:16:21.059 "cntlid": 55, 00:16:21.059 "qid": 0, 00:16:21.059 "state": "enabled", 00:16:21.059 "thread": "nvmf_tgt_poll_group_000", 00:16:21.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:21.059 "listen_address": { 00:16:21.059 "trtype": "TCP", 00:16:21.059 "adrfam": "IPv4", 00:16:21.059 "traddr": "10.0.0.2", 00:16:21.059 "trsvcid": "4420" 00:16:21.059 }, 00:16:21.059 "peer_address": { 00:16:21.059 "trtype": "TCP", 00:16:21.059 "adrfam": "IPv4", 00:16:21.059 "traddr": "10.0.0.1", 00:16:21.059 "trsvcid": "56416" 00:16:21.059 }, 00:16:21.059 "auth": { 00:16:21.059 "state": "completed", 00:16:21.059 "digest": "sha384", 00:16:21.059 "dhgroup": "null" 00:16:21.059 } 00:16:21.059 } 00:16:21.059 ]' 00:16:21.059 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.059 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.059 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.060 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:21.060 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.060 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.060 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.060 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.318 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:21.318 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:21.884 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.884 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.884 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.884 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.884 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.884 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.884 13:08:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.884 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:21.884 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:22.143 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:22.143 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.143 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:22.143 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:22.143 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:22.143 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.143 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.143 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.143 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.143 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.143 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.143 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.143 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.402 00:16:22.402 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.402 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.402 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.402 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.402 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.402 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:22.402 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.660 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.660 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.660 { 00:16:22.660 "cntlid": 57, 00:16:22.660 "qid": 0, 00:16:22.660 "state": "enabled", 00:16:22.660 "thread": "nvmf_tgt_poll_group_000", 00:16:22.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:22.660 "listen_address": { 00:16:22.661 "trtype": "TCP", 00:16:22.661 "adrfam": "IPv4", 00:16:22.661 "traddr": "10.0.0.2", 00:16:22.661 "trsvcid": "4420" 00:16:22.661 }, 00:16:22.661 "peer_address": { 00:16:22.661 "trtype": "TCP", 00:16:22.661 "adrfam": "IPv4", 00:16:22.661 "traddr": "10.0.0.1", 00:16:22.661 "trsvcid": "56436" 00:16:22.661 }, 00:16:22.661 "auth": { 00:16:22.661 "state": "completed", 00:16:22.661 "digest": "sha384", 00:16:22.661 "dhgroup": "ffdhe2048" 00:16:22.661 } 00:16:22.661 } 00:16:22.661 ]' 00:16:22.661 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.661 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.661 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.661 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.661 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.661 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.661 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.661 13:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.919 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:22.919 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:23.486 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.486 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.486 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.486 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.486 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.486 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.486 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:23.486 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:23.745 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:23.745 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.745 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.745 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:23.745 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:23.745 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.745 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.745 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.745 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.745 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.746 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.746 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.746 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.004 00:16:24.004 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.004 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.004 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.264 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.264 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.264 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.264 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.264 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.264 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.264 { 00:16:24.264 "cntlid": 59, 00:16:24.264 "qid": 0, 00:16:24.264 "state": "enabled", 00:16:24.264 "thread": "nvmf_tgt_poll_group_000", 00:16:24.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:24.264 "listen_address": { 00:16:24.264 "trtype": "TCP", 00:16:24.264 "adrfam": "IPv4", 00:16:24.264 "traddr": "10.0.0.2", 00:16:24.264 "trsvcid": "4420" 00:16:24.264 }, 00:16:24.264 "peer_address": { 00:16:24.264 "trtype": "TCP", 00:16:24.264 "adrfam": "IPv4", 00:16:24.264 "traddr": "10.0.0.1", 00:16:24.264 "trsvcid": "56452" 00:16:24.264 }, 00:16:24.264 "auth": { 00:16:24.264 "state": "completed", 00:16:24.264 "digest": "sha384", 00:16:24.264 "dhgroup": "ffdhe2048" 00:16:24.264 } 00:16:24.264 } 00:16:24.264 ]' 00:16:24.264 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.264 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.264 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.264 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.264 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.264 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.264 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.264 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.522 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:24.522 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:25.088 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.088 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:25.088 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.088 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.088 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.088 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.088 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:25.088 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:25.347 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:25.347 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.347 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.347 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:25.347 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.347 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.347 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.347 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.347 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.347 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.347 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.347 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.347 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.606 00:16:25.606 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.606 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.606 13:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.864 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.864 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.864 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.864 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.864 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.864 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.864 { 00:16:25.864 "cntlid": 61, 00:16:25.864 "qid": 0, 00:16:25.864 "state": "enabled", 00:16:25.864 "thread": "nvmf_tgt_poll_group_000", 00:16:25.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:25.864 "listen_address": { 00:16:25.864 "trtype": "TCP", 00:16:25.864 "adrfam": "IPv4", 00:16:25.864 "traddr": "10.0.0.2", 00:16:25.864 "trsvcid": "4420" 00:16:25.864 }, 00:16:25.864 "peer_address": { 00:16:25.864 "trtype": "TCP", 00:16:25.864 "adrfam": "IPv4", 00:16:25.865 "traddr": "10.0.0.1", 00:16:25.865 "trsvcid": "56480" 00:16:25.865 }, 00:16:25.865 "auth": { 00:16:25.865 "state": "completed", 00:16:25.865 "digest": "sha384", 00:16:25.865 "dhgroup": "ffdhe2048" 00:16:25.865 } 00:16:25.865 } 00:16:25.865 ]' 00:16:25.865 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.865 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.865 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.865 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.865 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.865 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.865 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.865 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.123 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:26.123 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:26.691 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.691 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.691 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.691 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.691 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.691 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.691 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:26.691 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:26.950 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:26.950 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.950 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:26.950 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:26.950 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:26.950 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.950 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:26.950 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.950 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.950 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.950 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:26.950 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.950 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.209 00:16:27.209 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.209 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:27.209 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.468 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.468 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.468 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.468 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.468 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.468 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.468 { 00:16:27.468 "cntlid": 63, 00:16:27.468 "qid": 0, 00:16:27.468 "state": "enabled", 00:16:27.468 "thread": "nvmf_tgt_poll_group_000", 00:16:27.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:27.468 "listen_address": { 00:16:27.468 "trtype": "TCP", 00:16:27.468 "adrfam": "IPv4", 00:16:27.468 "traddr": "10.0.0.2", 00:16:27.468 "trsvcid": "4420" 00:16:27.468 }, 00:16:27.468 "peer_address": { 00:16:27.468 "trtype": "TCP", 00:16:27.468 "adrfam": "IPv4", 00:16:27.468 "traddr": "10.0.0.1", 00:16:27.468 "trsvcid": "56516" 00:16:27.468 }, 00:16:27.468 "auth": { 00:16:27.468 "state": "completed", 00:16:27.468 "digest": "sha384", 00:16:27.468 "dhgroup": "ffdhe2048" 00:16:27.468 } 00:16:27.468 } 00:16:27.468 ]' 00:16:27.468 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.468 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.468 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.468 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.468 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.468 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.468 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.468 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.726 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:27.726 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:28.293 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:28.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.293 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.293 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.293 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.293 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.293 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.293 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.293 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:28.293 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:28.552 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:28.552 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.552 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:28.552 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:28.552 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:28.552 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.552 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.552 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.552 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.552 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.552 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.552 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.552 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.810 
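Each pass of the trace above repeats one fixed shape: a host-side bdev_nvme_set_options RPC pins the DH-HMAC-CHAP digest and DH group the initiator may negotiate, a target-side nvmf_subsystem_add_host admits the host NQN together with its key pair, and bdev_nvme_attach_controller then opens the queue pair, with the authentication running as part of the fabric CONNECT. A condensed sketch of the pass that just completed (sha384/ffdhe3072, key0), assuming key0/ckey0 are key names registered earlier in the test outside this excerpt, and with the full rpc.py path shortened:

    # host side (-s /var/tmp/host.sock): restrict negotiation to sha384 + ffdhe3072
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # target side: admit this host NQN with a host key and a controller (bidirectional) key
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach; DH-HMAC-CHAP is negotiated during CONNECT on the new qpair
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0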
00:16:28.810 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.810 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.810 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.069 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.069 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.069 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.069 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.069 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.069 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.069 { 00:16:29.069 "cntlid": 65, 00:16:29.069 "qid": 0, 00:16:29.069 "state": "enabled", 00:16:29.069 "thread": "nvmf_tgt_poll_group_000", 00:16:29.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:29.069 "listen_address": { 00:16:29.069 "trtype": "TCP", 00:16:29.069 "adrfam": "IPv4", 00:16:29.069 "traddr": "10.0.0.2", 00:16:29.069 "trsvcid": "4420" 00:16:29.069 }, 00:16:29.069 "peer_address": { 00:16:29.069 "trtype": "TCP", 00:16:29.069 "adrfam": "IPv4", 00:16:29.069 "traddr": "10.0.0.1", 00:16:29.069 "trsvcid": "56554" 00:16:29.069 }, 00:16:29.069 "auth": { 00:16:29.069 "state": "completed", 00:16:29.069 "digest": "sha384", 00:16:29.069 "dhgroup": "ffdhe3072" 00:16:29.069 } 00:16:29.069 } 00:16:29.069 ]' 00:16:29.069 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.069 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.069 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.069 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:29.069 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.069 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.069 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.069 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.328 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:29.328 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:29.895 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.895 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.895 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.895 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.895 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.895 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.895 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:29.895 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:30.155 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:30.155 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.155 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:30.155 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:30.155 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:30.155 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.155 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.155 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.155 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.155 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.155 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.155 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.155 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.414 00:16:30.414 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.414 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.414 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.673 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.673 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.673 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.673 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.673 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.673 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.673 { 00:16:30.673 "cntlid": 67, 00:16:30.673 "qid": 0, 00:16:30.673 "state": "enabled", 00:16:30.673 "thread": "nvmf_tgt_poll_group_000", 00:16:30.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:30.673 "listen_address": { 00:16:30.673 "trtype": "TCP", 00:16:30.673 "adrfam": "IPv4", 00:16:30.673 "traddr": "10.0.0.2", 00:16:30.673 "trsvcid": "4420" 00:16:30.673 }, 00:16:30.673 "peer_address": { 00:16:30.673 "trtype": "TCP", 00:16:30.673 "adrfam": "IPv4", 00:16:30.673 "traddr": "10.0.0.1", 00:16:30.673 "trsvcid": "56586" 00:16:30.673 }, 00:16:30.673 "auth": { 00:16:30.673 "state": "completed", 00:16:30.673 "digest": "sha384", 00:16:30.673 "dhgroup": "ffdhe3072" 00:16:30.673 } 00:16:30.673 } 00:16:30.673 ]' 00:16:30.673 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.673 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.673 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.673 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.673 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.673 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.673 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.673 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.932 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret 
DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:30.932 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:31.499 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.499 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.499 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.499 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.499 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.499 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.499 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:31.499 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:31.757 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:31.757 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.757 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:31.757 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:31.757 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:31.757 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.757 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.757 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.757 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.757 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.757 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.757 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.757 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.015 00:16:32.015 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.015 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.015 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.274 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.274 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.274 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.274 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.274 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.274 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.274 { 00:16:32.274 "cntlid": 69, 00:16:32.274 "qid": 0, 00:16:32.274 "state": "enabled", 00:16:32.274 "thread": "nvmf_tgt_poll_group_000", 00:16:32.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:32.274 "listen_address": { 00:16:32.274 "trtype": "TCP", 00:16:32.274 "adrfam": "IPv4", 00:16:32.274 "traddr": "10.0.0.2", 00:16:32.274 "trsvcid": "4420" 00:16:32.274 }, 00:16:32.274 "peer_address": { 00:16:32.274 "trtype": "TCP", 00:16:32.274 "adrfam": "IPv4", 00:16:32.274 "traddr": "10.0.0.1", 00:16:32.274 "trsvcid": "40340" 00:16:32.274 }, 00:16:32.274 "auth": { 00:16:32.274 "state": "completed", 00:16:32.274 "digest": "sha384", 00:16:32.274 "dhgroup": "ffdhe3072" 00:16:32.274 } 00:16:32.274 } 00:16:32.274 ]' 00:16:32.274 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.274 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:32.274 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.274 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.274 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.274 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.274 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.274 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:32.532 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:32.532 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:33.100 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.100 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.100 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.100 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.100 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.100 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.100 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:33.100 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:33.359 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:33.359 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.359 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:33.359 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:33.359 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:33.359 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.359 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:33.359 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.359 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.359 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.359 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
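Note the asymmetry in this pass: for key3 the trace issues nvmf_subsystem_add_host with --dhchap-key key3 only, and bdev_connect likewise carries no --dhchap-ctrlr-key. That is the ${ckeys[$3]:+...} expansion visible above doing its job: when no controller key is registered for an index, the ckey array expands to zero words and the pass exercises unidirectional authentication (the host proves its identity; the controller is not challenged in return). A minimal restatement of the idiom, where idx and hostnqn are stand-ins for the positional argument and the UUID-based NQN used in the trace:

    # empty/unset ckeys[idx]  =>  ckey=()  =>  "${ckey[@]}" expands to nothing
    ckey=(${ckeys[$idx]:+--dhchap-ctrlr-key "ckey$idx"})
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$idx" "${ckey[@]}"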
00:16:33.359 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.359 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.617 00:16:33.617 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.617 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.617 13:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.875 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.875 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.875 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.875 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.875 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.875 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.875 { 00:16:33.875 "cntlid": 71, 00:16:33.875 "qid": 0, 00:16:33.875 "state": "enabled", 00:16:33.875 "thread": "nvmf_tgt_poll_group_000", 00:16:33.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:33.875 "listen_address": { 00:16:33.875 "trtype": "TCP", 00:16:33.875 "adrfam": "IPv4", 00:16:33.875 "traddr": "10.0.0.2", 00:16:33.875 "trsvcid": "4420" 00:16:33.875 }, 00:16:33.875 "peer_address": { 00:16:33.875 "trtype": "TCP", 00:16:33.875 "adrfam": "IPv4", 00:16:33.875 "traddr": "10.0.0.1", 00:16:33.875 "trsvcid": "40370" 00:16:33.875 }, 00:16:33.875 "auth": { 00:16:33.875 "state": "completed", 00:16:33.875 "digest": "sha384", 00:16:33.875 "dhgroup": "ffdhe3072" 00:16:33.875 } 00:16:33.875 } 00:16:33.875 ]' 00:16:33.875 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.875 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.876 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.876 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:33.876 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.876 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.876 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.876 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.134 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:34.134 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:34.700 13:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.700 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.700 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.700 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.700 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.700 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.700 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.700 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:34.700 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:34.959 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:34.959 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.959 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:34.959 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:34.959 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.959 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.959 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.959 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.959 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.959 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
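After each attach, the negotiated parameters are asserted rather than taken on faith: nvmf_subsystem_get_qpairs dumps the admin qpair (qid 0), and three jq probes confirm that the digest and DH group actually negotiated match what bdev_nvme_set_options requested and that the authentication state machine reached "completed". A minimal restatement of the check for the sha384/ffdhe3072 pass just above, assuming the same single-qpair JSON layout as in the trace; the exact jq plumbing in target/auth.sh may differ:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]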
00:16:34.959 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.959 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.959 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.218 00:16:35.218 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.218 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.218 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.477 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.477 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.477 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.477 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.477 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.477 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.477 { 00:16:35.477 "cntlid": 73, 00:16:35.477 "qid": 0, 00:16:35.477 "state": "enabled", 00:16:35.477 "thread": "nvmf_tgt_poll_group_000", 00:16:35.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:35.477 "listen_address": { 00:16:35.477 "trtype": "TCP", 00:16:35.477 "adrfam": "IPv4", 00:16:35.477 "traddr": "10.0.0.2", 00:16:35.477 "trsvcid": "4420" 00:16:35.477 }, 00:16:35.477 "peer_address": { 00:16:35.477 "trtype": "TCP", 00:16:35.477 "adrfam": "IPv4", 00:16:35.477 "traddr": "10.0.0.1", 00:16:35.477 "trsvcid": "40400" 00:16:35.477 }, 00:16:35.477 "auth": { 00:16:35.477 "state": "completed", 00:16:35.477 "digest": "sha384", 00:16:35.477 "dhgroup": "ffdhe4096" 00:16:35.477 } 00:16:35.477 } 00:16:35.477 ]' 00:16:35.477 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.477 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.477 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.477 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:35.477 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.477 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.477 
13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.477 13:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.736 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:35.736 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:36.311 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.311 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.311 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.311 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.311 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.311 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.311 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:36.311 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:36.570 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:36.570 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.570 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:36.570 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:36.570 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.570 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.570 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.570 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.570 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.570 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.570 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.570 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.570 13:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.901 00:16:36.901 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.901 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.901 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.195 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.195 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.195 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.195 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.195 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.195 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.195 { 00:16:37.195 "cntlid": 75, 00:16:37.195 "qid": 0, 00:16:37.195 "state": "enabled", 00:16:37.195 "thread": "nvmf_tgt_poll_group_000", 00:16:37.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:37.195 "listen_address": { 00:16:37.195 "trtype": "TCP", 00:16:37.195 "adrfam": "IPv4", 00:16:37.195 "traddr": "10.0.0.2", 00:16:37.195 "trsvcid": "4420" 00:16:37.195 }, 00:16:37.195 "peer_address": { 00:16:37.195 "trtype": "TCP", 00:16:37.195 "adrfam": "IPv4", 00:16:37.195 "traddr": "10.0.0.1", 00:16:37.195 "trsvcid": "40414" 00:16:37.195 }, 00:16:37.195 "auth": { 00:16:37.195 "state": "completed", 00:16:37.195 "digest": "sha384", 00:16:37.195 "dhgroup": "ffdhe4096" 00:16:37.195 } 00:16:37.195 } 00:16:37.195 ]' 00:16:37.195 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.195 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.195 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.195 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:37.195 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.196 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.196 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.196 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.535 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:37.535 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:38.103 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.103 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.103 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.103 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.103 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.103 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.103 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:38.103 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:38.363 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:38.363 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.363 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:38.363 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:38.363 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.363 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.363 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.363 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.363 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.363 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.363 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.363 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.363 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.622 00:16:38.622 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.622 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.622 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.622 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.622 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.622 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.622 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.882 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.882 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.882 { 00:16:38.882 "cntlid": 77, 00:16:38.882 "qid": 0, 00:16:38.882 "state": "enabled", 00:16:38.882 "thread": "nvmf_tgt_poll_group_000", 00:16:38.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:38.882 "listen_address": { 00:16:38.882 "trtype": "TCP", 00:16:38.882 "adrfam": "IPv4", 00:16:38.882 "traddr": "10.0.0.2", 00:16:38.882 "trsvcid": "4420" 00:16:38.882 }, 00:16:38.882 "peer_address": { 00:16:38.882 "trtype": "TCP", 00:16:38.882 "adrfam": "IPv4", 00:16:38.882 "traddr": "10.0.0.1", 00:16:38.882 "trsvcid": "40438" 00:16:38.882 }, 00:16:38.882 "auth": { 00:16:38.882 "state": "completed", 00:16:38.882 "digest": "sha384", 00:16:38.882 "dhgroup": "ffdhe4096" 00:16:38.882 } 00:16:38.882 } 00:16:38.882 ]' 00:16:38.882 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.882 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.882 13:08:42 
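Each nvme_connect above passes the host and controller secrets in the NVMe in-band authentication secret representation, DHHC-1:<t>:<base64>:, where <t> is the transformation tag. In this run key0 carries DHHC-1:00: (an untransformed secret) while key1, key2 and key3 carry DHHC-1:01:, :02: and :03: (secrets transformed with SHA-256, SHA-384 and SHA-512 respectively). The keys are not generated anywhere in this log; newer nvme-cli builds can produce them with gen-dhchap-key, roughly as below (the flag spellings vary across nvme-cli versions, so treat this as an assumption rather than a transcript of this job):

    # Hypothetical: emit a SHA-384-transformed 48-byte DH-HMAC-CHAP secret.
    # --hmac/--key-length/--nqn are nvme-cli gen-dhchap-key options, not from this log.
    nvme gen-dhchap-key --hmac=2 --key-length=48 \
        --nqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562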
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.882 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.882 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.882 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.882 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.882 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.141 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:39.141 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:39.708 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.709 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.709 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.709 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.709 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.709 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.709 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.709 13:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.968 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:39.968 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.968 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:39.968 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:39.968 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:39.968 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.968 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:39.968 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.968 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.968 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.968 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:39.968 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.968 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.227 00:16:40.227 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.227 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.227 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.486 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.486 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.486 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.486 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.486 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.486 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.486 { 00:16:40.486 "cntlid": 79, 00:16:40.486 "qid": 0, 00:16:40.486 "state": "enabled", 00:16:40.486 "thread": "nvmf_tgt_poll_group_000", 00:16:40.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:40.486 "listen_address": { 00:16:40.486 "trtype": "TCP", 00:16:40.486 "adrfam": "IPv4", 00:16:40.487 "traddr": "10.0.0.2", 00:16:40.487 "trsvcid": "4420" 00:16:40.487 }, 00:16:40.487 "peer_address": { 00:16:40.487 "trtype": "TCP", 00:16:40.487 "adrfam": "IPv4", 00:16:40.487 "traddr": "10.0.0.1", 00:16:40.487 "trsvcid": "40464" 00:16:40.487 }, 00:16:40.487 "auth": { 00:16:40.487 "state": "completed", 00:16:40.487 "digest": "sha384", 00:16:40.487 "dhgroup": "ffdhe4096" 00:16:40.487 } 00:16:40.487 } 00:16:40.487 ]' 00:16:40.487 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.487 13:08:43 
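One detail worth noticing in the key3 pass now being verified: ckeys[3] is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at auth.sh@68 collapses to nothing, and both nvmf_subsystem_add_host and the host-side attach are issued with --dhchap-key key3 alone. Keys 0-2 therefore exercise bidirectional authentication and key3 unidirectional. Side by side (sketch; $hostnqn stands for the host NQN used throughout this log):

    # Bidirectional (keys 0-2): the controller must also prove its identity.
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Unidirectional (key3): only the host is authenticated.
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key3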
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.487 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.487 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.487 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.487 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.487 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.487 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.746 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:40.746 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:41.315 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.315 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.315 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.315 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.315 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.315 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.315 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.315 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:41.315 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:41.574 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:41.574 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.575 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:41.575 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:41.575 13:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.575 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.575 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.575 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.575 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.575 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.575 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.575 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.575 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.833 00:16:41.833 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.833 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.833 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.091 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.091 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.091 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.091 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.091 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.091 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.091 { 00:16:42.091 "cntlid": 81, 00:16:42.091 "qid": 0, 00:16:42.091 "state": "enabled", 00:16:42.091 "thread": "nvmf_tgt_poll_group_000", 00:16:42.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:42.091 "listen_address": { 00:16:42.091 "trtype": "TCP", 00:16:42.091 "adrfam": "IPv4", 00:16:42.091 "traddr": "10.0.0.2", 00:16:42.091 "trsvcid": "4420" 00:16:42.091 }, 00:16:42.091 "peer_address": { 00:16:42.091 "trtype": "TCP", 00:16:42.091 "adrfam": "IPv4", 00:16:42.091 "traddr": "10.0.0.1", 00:16:42.091 "trsvcid": "56490" 00:16:42.091 }, 00:16:42.091 "auth": { 00:16:42.091 "state": "completed", 00:16:42.091 "digest": 
"sha384", 00:16:42.091 "dhgroup": "ffdhe6144" 00:16:42.091 } 00:16:42.091 } 00:16:42.091 ]' 00:16:42.091 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.091 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.091 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.091 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:42.091 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.091 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.091 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.091 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.350 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:42.350 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:42.918 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.918 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.918 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.918 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.918 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.918 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.918 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:42.918 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:43.177 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:43.177 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.177 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:43.177 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:43.177 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.177 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.177 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.177 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.177 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.177 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.177 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.177 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.177 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.436 00:16:43.695 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.695 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.695 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.695 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.695 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.696 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.696 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.696 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.696 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.696 { 00:16:43.696 "cntlid": 83, 00:16:43.696 "qid": 0, 00:16:43.696 "state": "enabled", 00:16:43.696 "thread": "nvmf_tgt_poll_group_000", 00:16:43.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.696 "listen_address": { 00:16:43.696 "trtype": "TCP", 00:16:43.696 "adrfam": "IPv4", 00:16:43.696 "traddr": "10.0.0.2", 00:16:43.696 
"trsvcid": "4420" 00:16:43.696 }, 00:16:43.696 "peer_address": { 00:16:43.696 "trtype": "TCP", 00:16:43.696 "adrfam": "IPv4", 00:16:43.696 "traddr": "10.0.0.1", 00:16:43.696 "trsvcid": "56518" 00:16:43.696 }, 00:16:43.696 "auth": { 00:16:43.696 "state": "completed", 00:16:43.696 "digest": "sha384", 00:16:43.696 "dhgroup": "ffdhe6144" 00:16:43.696 } 00:16:43.696 } 00:16:43.696 ]' 00:16:43.696 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.696 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.696 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.954 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:43.954 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.954 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.954 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.954 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.954 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:43.954 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:44.521 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.521 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.521 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.521 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.780 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.780 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.780 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:44.780 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:44.780 
13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:44.780 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.780 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:44.780 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:44.780 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.780 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.780 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.780 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.780 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.780 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.780 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.780 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.780 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.347 00:16:45.347 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.347 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.347 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.347 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.347 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.347 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.347 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.347 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.347 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.347 { 00:16:45.347 "cntlid": 85, 00:16:45.347 "qid": 0, 00:16:45.347 "state": "enabled", 00:16:45.347 "thread": "nvmf_tgt_poll_group_000", 00:16:45.347 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.347 "listen_address": { 00:16:45.347 "trtype": "TCP", 00:16:45.347 "adrfam": "IPv4", 00:16:45.347 "traddr": "10.0.0.2", 00:16:45.347 "trsvcid": "4420" 00:16:45.347 }, 00:16:45.347 "peer_address": { 00:16:45.347 "trtype": "TCP", 00:16:45.347 "adrfam": "IPv4", 00:16:45.347 "traddr": "10.0.0.1", 00:16:45.347 "trsvcid": "56548" 00:16:45.347 }, 00:16:45.347 "auth": { 00:16:45.347 "state": "completed", 00:16:45.347 "digest": "sha384", 00:16:45.347 "dhgroup": "ffdhe6144" 00:16:45.347 } 00:16:45.347 } 00:16:45.347 ]' 00:16:45.347 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.606 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.606 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.606 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.606 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.606 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.606 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.606 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.865 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:45.865 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:46.432 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.432 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.432 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.432 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.432 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.432 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.432 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:46.432 13:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:46.691 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:46.691 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.691 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.691 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:46.691 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.691 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.691 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:46.691 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.691 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.691 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.691 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.691 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.691 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.950 00:16:46.950 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.950 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.950 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.209 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.209 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.209 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.209 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.209 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.209 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.209 { 00:16:47.209 "cntlid": 87, 
00:16:47.209 "qid": 0, 00:16:47.209 "state": "enabled", 00:16:47.209 "thread": "nvmf_tgt_poll_group_000", 00:16:47.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:47.209 "listen_address": { 00:16:47.209 "trtype": "TCP", 00:16:47.209 "adrfam": "IPv4", 00:16:47.209 "traddr": "10.0.0.2", 00:16:47.209 "trsvcid": "4420" 00:16:47.209 }, 00:16:47.209 "peer_address": { 00:16:47.209 "trtype": "TCP", 00:16:47.209 "adrfam": "IPv4", 00:16:47.209 "traddr": "10.0.0.1", 00:16:47.209 "trsvcid": "56566" 00:16:47.209 }, 00:16:47.209 "auth": { 00:16:47.209 "state": "completed", 00:16:47.209 "digest": "sha384", 00:16:47.209 "dhgroup": "ffdhe6144" 00:16:47.209 } 00:16:47.209 } 00:16:47.209 ]' 00:16:47.209 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.209 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.209 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.209 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.209 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.209 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.209 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.209 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.468 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:47.468 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:48.036 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.036 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.036 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.036 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.036 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.036 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.036 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.036 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:48.036 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:48.295 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:48.295 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.295 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.296 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:48.296 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.296 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.296 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.296 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.296 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.296 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.296 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.296 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.296 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.864 00:16:48.864 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.864 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.864 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.123 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.123 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.123 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.123 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.124 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.124 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.124 { 00:16:49.124 "cntlid": 89, 00:16:49.124 "qid": 0, 00:16:49.124 "state": "enabled", 00:16:49.124 "thread": "nvmf_tgt_poll_group_000", 00:16:49.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.124 "listen_address": { 00:16:49.124 "trtype": "TCP", 00:16:49.124 "adrfam": "IPv4", 00:16:49.124 "traddr": "10.0.0.2", 00:16:49.124 "trsvcid": "4420" 00:16:49.124 }, 00:16:49.124 "peer_address": { 00:16:49.124 "trtype": "TCP", 00:16:49.124 "adrfam": "IPv4", 00:16:49.124 "traddr": "10.0.0.1", 00:16:49.124 "trsvcid": "56582" 00:16:49.124 }, 00:16:49.124 "auth": { 00:16:49.124 "state": "completed", 00:16:49.124 "digest": "sha384", 00:16:49.124 "dhgroup": "ffdhe8192" 00:16:49.124 } 00:16:49.124 } 00:16:49.124 ]' 00:16:49.124 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.124 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.124 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.124 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.124 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.124 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.124 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.124 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.383 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:49.383 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:49.952 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.952 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.952 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.952 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.952 13:08:53 
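Every hostrpc invocation in this trace is immediately followed by its expansion at auth.sh@31: the same RPC replayed through scripts/rpc.py -s /var/tmp/host.sock. In other words, the test drives two SPDK RPC endpoints at once: rpc_cmd reaches the NVMe-oF target, while hostrpc reaches a separate host-side application, the one that owns the nvme0 bdev controllers, over its own Unix socket. The helper implied by those expansions is essentially:

    # hostrpc as implied by the auth.sh@31 expansions in this log (reconstruction);
    # $rootdir stands for the SPDK checkout seen in the rpc.py path above.
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }

Keeping host and target in separate processes is what lets the test reconfigure the host initiator via bdev_nvme_set_options between iterations without touching the target subsystem's configuration.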
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.952 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.952 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:49.952 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.211 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:50.211 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.211 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.211 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:50.211 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.211 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.211 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.211 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.211 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.211 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.211 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.211 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.211 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.779 00:16:50.779 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.779 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.779 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.779 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.779 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:50.779 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.779 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.779 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.779 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.779 { 00:16:50.779 "cntlid": 91, 00:16:50.779 "qid": 0, 00:16:50.779 "state": "enabled", 00:16:50.779 "thread": "nvmf_tgt_poll_group_000", 00:16:50.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:50.779 "listen_address": { 00:16:50.779 "trtype": "TCP", 00:16:50.779 "adrfam": "IPv4", 00:16:50.779 "traddr": "10.0.0.2", 00:16:50.779 "trsvcid": "4420" 00:16:50.779 }, 00:16:50.779 "peer_address": { 00:16:50.779 "trtype": "TCP", 00:16:50.779 "adrfam": "IPv4", 00:16:50.779 "traddr": "10.0.0.1", 00:16:50.779 "trsvcid": "56608" 00:16:50.779 }, 00:16:50.779 "auth": { 00:16:50.779 "state": "completed", 00:16:50.779 "digest": "sha384", 00:16:50.779 "dhgroup": "ffdhe8192" 00:16:50.779 } 00:16:50.779 } 00:16:50.779 ]' 00:16:50.779 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.038 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.038 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.038 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.038 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.038 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.038 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.038 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.297 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:51.297 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:51.865 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.865 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.865 13:08:55 
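Each pass then repeats the handshake with the Linux kernel initiator (auth.sh@36/@82): nvme connect carries the secrets inline as DHHC-1 strings, and nvme disconnect tears the association down before nvmf_subsystem_remove_host revokes the host entry. In the DHHC-1:xx: prefix, xx identifies the hash used to transform the stored secret — 00 for an untransformed secret, 01/02/03 for SHA-256/-384/-512 — which is why key0 above appears as DHHC-1:00: while its controller counterpart is DHHC-1:03:. A hedged sketch of this leg:

    # Kernel-initiator leg (flags copied from the trace: -i 1 limits I/O
    # queues to one, -l 0 sets ctrl-loss-tmo so a lost controller is not
    # retried). subnqn/hostnqn as in the earlier sketch; host_secret and
    # ctrl_secret are hypothetical variables holding the DHHC-1 strings
    # printed in the trace.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "${hostnqn#*uuid:}" -l 0 \
        --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
    nvme disconnect -n "$subnqn"

Secrets in this format can be produced with nvme-cli's gen-dhchap-key subcommand (for example nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn "$hostnqn"; exact flags depend on the nvme-cli version).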
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.865 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.865 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.865 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.865 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:51.865 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:52.124 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:52.124 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.124 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:52.124 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.124 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.124 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.124 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.124 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.124 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.124 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.124 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.125 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.125 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.383 00:16:52.643 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.643 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.643 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.643 13:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.643 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.643 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.643 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.643 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.643 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.643 { 00:16:52.643 "cntlid": 93, 00:16:52.643 "qid": 0, 00:16:52.643 "state": "enabled", 00:16:52.643 "thread": "nvmf_tgt_poll_group_000", 00:16:52.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:52.643 "listen_address": { 00:16:52.643 "trtype": "TCP", 00:16:52.643 "adrfam": "IPv4", 00:16:52.643 "traddr": "10.0.0.2", 00:16:52.643 "trsvcid": "4420" 00:16:52.643 }, 00:16:52.643 "peer_address": { 00:16:52.643 "trtype": "TCP", 00:16:52.643 "adrfam": "IPv4", 00:16:52.643 "traddr": "10.0.0.1", 00:16:52.643 "trsvcid": "50148" 00:16:52.643 }, 00:16:52.643 "auth": { 00:16:52.643 "state": "completed", 00:16:52.643 "digest": "sha384", 00:16:52.643 "dhgroup": "ffdhe8192" 00:16:52.643 } 00:16:52.643 } 00:16:52.643 ]' 00:16:52.643 13:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.643 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.643 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.903 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.903 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.903 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.903 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.903 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.162 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:53.162 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:53.730 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.730 13:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.730 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.730 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.730 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.730 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.730 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:53.730 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:53.730 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:53.730 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.730 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.730 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:53.730 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:53.730 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.730 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:53.730 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.730 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.730 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.730 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.730 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.730 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.297 00:16:54.297 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.297 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.297 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.555 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.555 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.555 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.555 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.555 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.555 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.555 { 00:16:54.555 "cntlid": 95, 00:16:54.555 "qid": 0, 00:16:54.555 "state": "enabled", 00:16:54.555 "thread": "nvmf_tgt_poll_group_000", 00:16:54.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:54.555 "listen_address": { 00:16:54.555 "trtype": "TCP", 00:16:54.555 "adrfam": "IPv4", 00:16:54.555 "traddr": "10.0.0.2", 00:16:54.555 "trsvcid": "4420" 00:16:54.555 }, 00:16:54.555 "peer_address": { 00:16:54.555 "trtype": "TCP", 00:16:54.555 "adrfam": "IPv4", 00:16:54.555 "traddr": "10.0.0.1", 00:16:54.555 "trsvcid": "50168" 00:16:54.556 }, 00:16:54.556 "auth": { 00:16:54.556 "state": "completed", 00:16:54.556 "digest": "sha384", 00:16:54.556 "dhgroup": "ffdhe8192" 00:16:54.556 } 00:16:54.556 } 00:16:54.556 ]' 00:16:54.556 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.556 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.556 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.556 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.556 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.556 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.556 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.556 13:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.815 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:54.815 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:16:55.383 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.383 13:08:58 
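The key3 pass just completed differs from the earlier ones: nvmf_subsystem_add_host and the attach carry --dhchap-key key3 only, and the kernel connect has a single --dhchap-secret. That follows from the expansion at auth.sh@68, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}): bash's ${var:+word} substitutes word only when var is set and non-empty, so an empty ckeys[3] leaves the array empty and the pass degrades to unidirectional authentication — the host proves itself to the controller, but the controller is never challenged back. A minimal reproduction:

    # Hypothetical values; slot 3 is deliberately empty, as in this run.
    ckeys=("ckey-a" "ckey-b" "ckey-c" "")
    for idx in 0 3; do
        ckey=(${ckeys[$idx]:+--dhchap-ctrlr-key "ckey$idx"})
        echo "key$idx: ${#ckey[@]} extra arg(s) -> ${ckey[*]}"
    done
    # key0: 2 extra arg(s) -> --dhchap-ctrlr-key ckey0
    # key3: 0 extra arg(s) ->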
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.383 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.383 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.383 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.383 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:55.383 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.383 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.383 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:55.383 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:55.643 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:55.643 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.643 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.643 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:55.643 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.643 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.643 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.643 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.643 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.643 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.643 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.643 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.643 13:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.902 00:16:55.902 
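Here the outer loops advance (auth.sh@118-@120): the digest moves to sha512 and the dhgroup array restarts at null, so keys 0-3 are re-run without an ephemeral DH exchange — with the null group the challenge-response rests entirely on the shared secret, while the FFDHE groups exercised before and after add an ephemeral exchange on top. The sweep has the shape below; the array contents are illustrative, since the actual lists live in target/auth.sh and may differ.

    # Shape of the sweep, reconstructed from the @118/@119/@120 loop markers
    # (illustrative arrays, not a transcript of auth.sh).
    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done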
13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.902 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.902 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.160 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.161 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.161 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.161 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.161 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.161 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.161 { 00:16:56.161 "cntlid": 97, 00:16:56.161 "qid": 0, 00:16:56.161 "state": "enabled", 00:16:56.161 "thread": "nvmf_tgt_poll_group_000", 00:16:56.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:56.161 "listen_address": { 00:16:56.161 "trtype": "TCP", 00:16:56.161 "adrfam": "IPv4", 00:16:56.161 "traddr": "10.0.0.2", 00:16:56.161 "trsvcid": "4420" 00:16:56.161 }, 00:16:56.161 "peer_address": { 00:16:56.161 "trtype": "TCP", 00:16:56.161 "adrfam": "IPv4", 00:16:56.161 "traddr": "10.0.0.1", 00:16:56.161 "trsvcid": "50200" 00:16:56.161 }, 00:16:56.161 "auth": { 00:16:56.161 "state": "completed", 00:16:56.161 "digest": "sha512", 00:16:56.161 "dhgroup": "null" 00:16:56.161 } 00:16:56.161 } 00:16:56.161 ]' 00:16:56.161 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.161 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.161 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.161 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:56.161 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.161 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.161 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.161 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.419 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:56.419 13:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:16:56.994 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.994 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.994 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.994 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.994 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.994 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.994 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:56.994 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:57.255 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:57.255 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.255 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.255 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:57.255 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.255 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.255 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.255 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.255 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.255 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.255 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.255 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.255 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.513 00:16:57.513 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.513 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.513 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.772 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.772 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.772 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.772 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.772 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.772 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.772 { 00:16:57.772 "cntlid": 99, 00:16:57.772 "qid": 0, 00:16:57.772 "state": "enabled", 00:16:57.772 "thread": "nvmf_tgt_poll_group_000", 00:16:57.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:57.772 "listen_address": { 00:16:57.772 "trtype": "TCP", 00:16:57.772 "adrfam": "IPv4", 00:16:57.772 "traddr": "10.0.0.2", 00:16:57.772 "trsvcid": "4420" 00:16:57.772 }, 00:16:57.772 "peer_address": { 00:16:57.772 "trtype": "TCP", 00:16:57.772 "adrfam": "IPv4", 00:16:57.772 "traddr": "10.0.0.1", 00:16:57.772 "trsvcid": "50238" 00:16:57.772 }, 00:16:57.772 "auth": { 00:16:57.772 "state": "completed", 00:16:57.772 "digest": "sha512", 00:16:57.772 "dhgroup": "null" 00:16:57.772 } 00:16:57.772 } 00:16:57.772 ]' 00:16:57.772 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.772 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.772 13:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.772 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:57.772 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.772 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.772 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.772 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.030 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:58.030 13:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:16:58.596 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.596 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.596 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.596 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.596 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.596 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.596 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:58.596 13:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:58.854 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:58.854 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.854 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.854 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:58.854 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:58.854 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.854 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.854 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.854 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.854 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.854 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.854 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:58.854 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.113 00:16:59.113 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.113 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.113 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.373 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.373 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.373 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.373 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.373 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.373 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.373 { 00:16:59.373 "cntlid": 101, 00:16:59.373 "qid": 0, 00:16:59.373 "state": "enabled", 00:16:59.373 "thread": "nvmf_tgt_poll_group_000", 00:16:59.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:59.373 "listen_address": { 00:16:59.373 "trtype": "TCP", 00:16:59.373 "adrfam": "IPv4", 00:16:59.373 "traddr": "10.0.0.2", 00:16:59.373 "trsvcid": "4420" 00:16:59.373 }, 00:16:59.373 "peer_address": { 00:16:59.373 "trtype": "TCP", 00:16:59.373 "adrfam": "IPv4", 00:16:59.373 "traddr": "10.0.0.1", 00:16:59.373 "trsvcid": "50250" 00:16:59.373 }, 00:16:59.373 "auth": { 00:16:59.373 "state": "completed", 00:16:59.373 "digest": "sha512", 00:16:59.373 "dhgroup": "null" 00:16:59.373 } 00:16:59.373 } 00:16:59.373 ]' 00:16:59.373 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.373 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.373 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.373 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:59.373 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.373 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.373 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.373 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.632 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:16:59.632 13:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:17:00.198 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.199 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.199 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.199 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.199 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.199 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.199 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:00.199 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:00.457 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:00.457 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.457 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.457 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:00.457 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.457 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.457 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:00.457 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.457 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.457 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.457 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.457 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.457 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.715 00:17:00.715 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.715 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.715 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.973 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.974 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.974 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.974 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.974 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.974 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.974 { 00:17:00.974 "cntlid": 103, 00:17:00.974 "qid": 0, 00:17:00.974 "state": "enabled", 00:17:00.974 "thread": "nvmf_tgt_poll_group_000", 00:17:00.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:00.974 "listen_address": { 00:17:00.974 "trtype": "TCP", 00:17:00.974 "adrfam": "IPv4", 00:17:00.974 "traddr": "10.0.0.2", 00:17:00.974 "trsvcid": "4420" 00:17:00.974 }, 00:17:00.974 "peer_address": { 00:17:00.974 "trtype": "TCP", 00:17:00.974 "adrfam": "IPv4", 00:17:00.974 "traddr": "10.0.0.1", 00:17:00.974 "trsvcid": "52878" 00:17:00.974 }, 00:17:00.974 "auth": { 00:17:00.974 "state": "completed", 00:17:00.974 "digest": "sha512", 00:17:00.974 "dhgroup": "null" 00:17:00.974 } 00:17:00.974 } 00:17:00.974 ]' 00:17:00.974 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.974 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.974 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.974 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:00.974 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.974 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.974 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.974 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.238 13:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:17:01.239 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:17:01.862 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.862 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.862 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.862 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.862 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.862 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.862 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.862 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.863 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:02.121 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:02.121 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.121 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.121 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:02.121 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.121 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.121 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.121 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.121 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.121 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.121 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:17:02.121 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.121 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.381 00:17:02.381 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.381 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.381 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.381 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.381 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.381 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.381 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.381 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.381 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.381 { 00:17:02.381 "cntlid": 105, 00:17:02.381 "qid": 0, 00:17:02.381 "state": "enabled", 00:17:02.381 "thread": "nvmf_tgt_poll_group_000", 00:17:02.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:02.381 "listen_address": { 00:17:02.381 "trtype": "TCP", 00:17:02.381 "adrfam": "IPv4", 00:17:02.381 "traddr": "10.0.0.2", 00:17:02.381 "trsvcid": "4420" 00:17:02.381 }, 00:17:02.381 "peer_address": { 00:17:02.381 "trtype": "TCP", 00:17:02.381 "adrfam": "IPv4", 00:17:02.381 "traddr": "10.0.0.1", 00:17:02.381 "trsvcid": "52914" 00:17:02.381 }, 00:17:02.381 "auth": { 00:17:02.381 "state": "completed", 00:17:02.381 "digest": "sha512", 00:17:02.381 "dhgroup": "ffdhe2048" 00:17:02.381 } 00:17:02.381 } 00:17:02.381 ]' 00:17:02.381 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.640 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.640 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.640 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:02.640 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.640 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.640 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.640 13:09:05 
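By this point the dhgroup has advanced to ffdhe2048, and the qpair dumps keep the same shape throughout: listen_address is the target (10.0.0.2:4420), peer_address is the host's ephemeral port, and auth reports the negotiated digest and dhgroup once state reaches completed. The cntlid values step by two across passes (89, 91, ... 105 above), consistent with each pass allocating two controllers — one for the SPDK attach and one for the subsequent kernel connect. The per-field checks can be collapsed into a single predicate, as in the earlier sketch:

    # Variables as in the first sketch; jq -e derives the exit status from
    # the boolean result, so this drops straight into a test script.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -e '
        .[0].auth | .state == "completed"
                  and .digest == "sha512"
                  and .dhgroup == "ffdhe2048"'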
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.899 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:17:02.899 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:17:03.467 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.467 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.467 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.467 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.467 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.467 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.467 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:03.467 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:03.727 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:03.727 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.727 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.727 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:03.727 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:03.727 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.727 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.727 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.727 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.727 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.727 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.727 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.727 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.986 00:17:03.986 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.986 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.986 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.986 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.987 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.987 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.987 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.987 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.987 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.987 { 00:17:03.987 "cntlid": 107, 00:17:03.987 "qid": 0, 00:17:03.987 "state": "enabled", 00:17:03.987 "thread": "nvmf_tgt_poll_group_000", 00:17:03.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:03.987 "listen_address": { 00:17:03.987 "trtype": "TCP", 00:17:03.987 "adrfam": "IPv4", 00:17:03.987 "traddr": "10.0.0.2", 00:17:03.987 "trsvcid": "4420" 00:17:03.987 }, 00:17:03.987 "peer_address": { 00:17:03.987 "trtype": "TCP", 00:17:03.987 "adrfam": "IPv4", 00:17:03.987 "traddr": "10.0.0.1", 00:17:03.987 "trsvcid": "52940" 00:17:03.987 }, 00:17:03.987 "auth": { 00:17:03.987 "state": "completed", 00:17:03.987 "digest": "sha512", 00:17:03.987 "dhgroup": "ffdhe2048" 00:17:03.987 } 00:17:03.987 } 00:17:03.987 ]' 00:17:03.987 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.246 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.246 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.246 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:04.246 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:04.246 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.246 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.246 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.504 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:17:04.504 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:17:05.072 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.072 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.072 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.072 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.072 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.072 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.072 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.072 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.331 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:05.331 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.331 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.331 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:05.331 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:05.331 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.331 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
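Each pass of the trace above is one iteration of the key loop in target/auth.sh: pick a digest/dhgroup pair, register the host's DH-HMAC-CHAP key on the subsystem, then attach a controller from the second SPDK app that plays the host role (hostrpc drives it through -s /var/tmp/host.sock, while rpc_cmd talks to the target). Stripped of xtrace noise, the target-side half of the key2 iteration reduces to the sketch below; RPC_HOST, SUBNQN and HOSTNQN are shorthands introduced here for the long rpc.py invocation and the NQNs in the log, not variables from auth.sh. The ${ckeys[$3]:+...} expansion visible at target/auth.sh@68 omits --dhchap-ctrlr-key entirely for slots without a controller key, which is why the key3 passes register --dhchap-key key3 alone.

  RPC_HOST="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  # restrict the host-side bdev layer to the digest/dhgroup under test
  $RPC_HOST bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # register the host on the target subsystem with key slot 2 (+ controller key, if set)
  ckey=(${ckeys[2]:+--dhchap-ctrlr-key "ckey2"})
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 "${ckey[@]}"
  # attach a controller through the authenticated TCP listener
  $RPC_HOST bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2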
00:17:05.331 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.331 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.331 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.331 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.331 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.331 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.590 00:17:05.590 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.590 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.590 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.590 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.590 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.590 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.590 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.590 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.590 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.590 { 00:17:05.590 "cntlid": 109, 00:17:05.590 "qid": 0, 00:17:05.590 "state": "enabled", 00:17:05.590 "thread": "nvmf_tgt_poll_group_000", 00:17:05.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:05.590 "listen_address": { 00:17:05.590 "trtype": "TCP", 00:17:05.590 "adrfam": "IPv4", 00:17:05.590 "traddr": "10.0.0.2", 00:17:05.590 "trsvcid": "4420" 00:17:05.590 }, 00:17:05.590 "peer_address": { 00:17:05.590 "trtype": "TCP", 00:17:05.590 "adrfam": "IPv4", 00:17:05.590 "traddr": "10.0.0.1", 00:17:05.590 "trsvcid": "52968" 00:17:05.590 }, 00:17:05.590 "auth": { 00:17:05.590 "state": "completed", 00:17:05.590 "digest": "sha512", 00:17:05.590 "dhgroup": "ffdhe2048" 00:17:05.590 } 00:17:05.590 } 00:17:05.590 ]' 00:17:05.590 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.847 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.847 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.847 13:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:05.847 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.847 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.847 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.847 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.104 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:17:06.104 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:17:06.670 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.670 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.670 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.670 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.670 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.670 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.670 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:06.670 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:06.670 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:06.670 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.670 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:06.670 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:06.670 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.670 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.670 13:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:06.670 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.670 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.670 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.670 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.670 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.671 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.928 00:17:06.928 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.928 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.928 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.185 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.185 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.185 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.185 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.185 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.185 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.185 { 00:17:07.185 "cntlid": 111, 00:17:07.185 "qid": 0, 00:17:07.185 "state": "enabled", 00:17:07.185 "thread": "nvmf_tgt_poll_group_000", 00:17:07.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:07.185 "listen_address": { 00:17:07.185 "trtype": "TCP", 00:17:07.185 "adrfam": "IPv4", 00:17:07.185 "traddr": "10.0.0.2", 00:17:07.185 "trsvcid": "4420" 00:17:07.185 }, 00:17:07.185 "peer_address": { 00:17:07.185 "trtype": "TCP", 00:17:07.185 "adrfam": "IPv4", 00:17:07.185 "traddr": "10.0.0.1", 00:17:07.185 "trsvcid": "52990" 00:17:07.185 }, 00:17:07.185 "auth": { 00:17:07.185 "state": "completed", 00:17:07.185 "digest": "sha512", 00:17:07.185 "dhgroup": "ffdhe2048" 00:17:07.185 } 00:17:07.185 } 00:17:07.185 ]' 00:17:07.185 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.185 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.185 
13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.442 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.442 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.442 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.442 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.443 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.443 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:17:07.443 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:17:08.009 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.009 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.009 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.009 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.267 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.526 00:17:08.526 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.526 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.526 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.785 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.785 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.785 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.785 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.785 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.785 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.785 { 00:17:08.785 "cntlid": 113, 00:17:08.785 "qid": 0, 00:17:08.785 "state": "enabled", 00:17:08.785 "thread": "nvmf_tgt_poll_group_000", 00:17:08.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:08.785 "listen_address": { 00:17:08.785 "trtype": "TCP", 00:17:08.785 "adrfam": "IPv4", 00:17:08.785 "traddr": "10.0.0.2", 00:17:08.785 "trsvcid": "4420" 00:17:08.785 }, 00:17:08.785 "peer_address": { 00:17:08.785 "trtype": "TCP", 00:17:08.785 "adrfam": "IPv4", 00:17:08.785 "traddr": "10.0.0.1", 00:17:08.785 "trsvcid": "53012" 00:17:08.785 }, 00:17:08.785 "auth": { 00:17:08.785 "state": "completed", 00:17:08.785 "digest": "sha512", 00:17:08.785 "dhgroup": "ffdhe3072" 00:17:08.785 } 00:17:08.785 } 00:17:08.785 ]' 00:17:08.785 13:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.785 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.785 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.785 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:08.785 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.044 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.044 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.044 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.044 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:17:09.044 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:17:09.610 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.610 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.610 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.610 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.868 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.868 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.868 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:09.868 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:09.868 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:09.868 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.868 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:09.868 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:09.868 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:09.868 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.868 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.868 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.868 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.868 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.868 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.868 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.868 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.126 00:17:10.126 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.126 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.126 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.385 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.385 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.385 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.385 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.385 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.385 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.385 { 00:17:10.385 "cntlid": 115, 00:17:10.385 "qid": 0, 00:17:10.385 "state": "enabled", 00:17:10.385 "thread": "nvmf_tgt_poll_group_000", 00:17:10.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:10.385 "listen_address": { 00:17:10.385 "trtype": "TCP", 00:17:10.385 "adrfam": "IPv4", 00:17:10.385 "traddr": "10.0.0.2", 00:17:10.385 "trsvcid": "4420" 00:17:10.385 }, 00:17:10.385 "peer_address": { 00:17:10.385 "trtype": "TCP", 00:17:10.385 "adrfam": "IPv4", 
00:17:10.385 "traddr": "10.0.0.1", 00:17:10.385 "trsvcid": "53042" 00:17:10.385 }, 00:17:10.385 "auth": { 00:17:10.385 "state": "completed", 00:17:10.385 "digest": "sha512", 00:17:10.385 "dhgroup": "ffdhe3072" 00:17:10.385 } 00:17:10.385 } 00:17:10.385 ]' 00:17:10.385 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.385 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.385 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.385 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:10.385 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.644 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.644 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.644 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.644 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:17:10.644 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:17:11.210 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.468 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.726 00:17:11.726 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.726 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.726 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.985 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.985 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.985 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.985 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.985 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.985 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.985 { 00:17:11.985 "cntlid": 117, 00:17:11.985 "qid": 0, 00:17:11.985 "state": "enabled", 00:17:11.985 "thread": "nvmf_tgt_poll_group_000", 00:17:11.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:11.985 "listen_address": { 00:17:11.985 "trtype": "TCP", 
00:17:11.985 "adrfam": "IPv4", 00:17:11.985 "traddr": "10.0.0.2", 00:17:11.985 "trsvcid": "4420" 00:17:11.985 }, 00:17:11.985 "peer_address": { 00:17:11.985 "trtype": "TCP", 00:17:11.985 "adrfam": "IPv4", 00:17:11.985 "traddr": "10.0.0.1", 00:17:11.985 "trsvcid": "42954" 00:17:11.985 }, 00:17:11.985 "auth": { 00:17:11.985 "state": "completed", 00:17:11.985 "digest": "sha512", 00:17:11.985 "dhgroup": "ffdhe3072" 00:17:11.985 } 00:17:11.985 } 00:17:11.985 ]' 00:17:11.985 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.985 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.985 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.243 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:12.243 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.243 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.243 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.243 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.502 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:17:12.502 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:17:13.069 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.069 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.069 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.069 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.069 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.069 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.070 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:13.070 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:13.070 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:13.070 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.070 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.070 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:13.070 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:13.070 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.070 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:13.070 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.070 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.327 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.327 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:13.327 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.327 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.327 00:17:13.586 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.586 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.586 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.586 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.586 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.586 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.586 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.586 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.586 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.586 { 00:17:13.586 "cntlid": 119, 00:17:13.586 "qid": 0, 00:17:13.586 "state": "enabled", 00:17:13.586 "thread": "nvmf_tgt_poll_group_000", 00:17:13.586 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:13.586 "listen_address": { 00:17:13.586 "trtype": "TCP", 00:17:13.586 "adrfam": "IPv4", 00:17:13.586 "traddr": "10.0.0.2", 00:17:13.586 "trsvcid": "4420" 00:17:13.586 }, 00:17:13.586 "peer_address": { 00:17:13.586 "trtype": "TCP", 00:17:13.586 "adrfam": "IPv4", 00:17:13.586 "traddr": "10.0.0.1", 00:17:13.586 "trsvcid": "42984" 00:17:13.586 }, 00:17:13.586 "auth": { 00:17:13.586 "state": "completed", 00:17:13.586 "digest": "sha512", 00:17:13.586 "dhgroup": "ffdhe3072" 00:17:13.586 } 00:17:13.586 } 00:17:13.586 ]' 00:17:13.586 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.586 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.586 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.862 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.862 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.862 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.862 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.862 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.120 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:17:14.120 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:17:14.688 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.689 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.689 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.689 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.689 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.689 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.689 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.689 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:14.689 13:09:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:14.689 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:14.689 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.689 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.689 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:14.689 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.689 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.689 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.689 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.689 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.689 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.689 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.689 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.689 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.947 00:17:15.206 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.206 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.206 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.207 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.207 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.207 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.207 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.207 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.207 13:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.207 { 00:17:15.207 "cntlid": 121, 00:17:15.207 "qid": 0, 00:17:15.207 "state": "enabled", 00:17:15.207 "thread": "nvmf_tgt_poll_group_000", 00:17:15.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:15.207 "listen_address": { 00:17:15.207 "trtype": "TCP", 00:17:15.207 "adrfam": "IPv4", 00:17:15.207 "traddr": "10.0.0.2", 00:17:15.207 "trsvcid": "4420" 00:17:15.207 }, 00:17:15.207 "peer_address": { 00:17:15.207 "trtype": "TCP", 00:17:15.207 "adrfam": "IPv4", 00:17:15.207 "traddr": "10.0.0.1", 00:17:15.207 "trsvcid": "43024" 00:17:15.207 }, 00:17:15.207 "auth": { 00:17:15.207 "state": "completed", 00:17:15.207 "digest": "sha512", 00:17:15.207 "dhgroup": "ffdhe4096" 00:17:15.207 } 00:17:15.207 } 00:17:15.207 ]' 00:17:15.207 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.465 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.465 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.465 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:15.465 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.465 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.465 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.465 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.724 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:17:15.724 13:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:17:16.288 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.288 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.288 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.288 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.288 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
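Beyond the SPDK-host bdev attach, each iteration also repeats the handshake from the Linux kernel initiator: nvme-cli is handed the same DHHC-1 secrets on the command line, connects, and the test then disconnects and de-registers the host before moving to the next key slot. A sketch of that leg (secrets shortened to '...' here, the full strings appear in the trace; HOSTID is the uuid portion of the host NQN, 80aaeb9f-0274-ea11-906e-0017a4403562):

  # kernel-initiator leg: in-band DH-HMAC-CHAP via nvme-cli
  # (-i 1: a single I/O queue, -l 0: ctrl-loss-tmo of zero)
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" \
      -l 0 --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n "$SUBNQN"
  # de-register the host so the next key/dhgroup combination starts clean
  rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"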
00:17:16.288 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.288 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:16.288 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:16.547 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:16.547 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.547 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.547 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:16.547 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:16.547 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.547 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.547 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.547 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.547 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.547 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.547 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.547 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.805 00:17:16.805 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.805 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.806 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.064 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.064 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.064 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.064 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.064 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.064 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.064 { 00:17:17.064 "cntlid": 123, 00:17:17.064 "qid": 0, 00:17:17.064 "state": "enabled", 00:17:17.064 "thread": "nvmf_tgt_poll_group_000", 00:17:17.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:17.064 "listen_address": { 00:17:17.064 "trtype": "TCP", 00:17:17.064 "adrfam": "IPv4", 00:17:17.064 "traddr": "10.0.0.2", 00:17:17.064 "trsvcid": "4420" 00:17:17.064 }, 00:17:17.064 "peer_address": { 00:17:17.064 "trtype": "TCP", 00:17:17.064 "adrfam": "IPv4", 00:17:17.064 "traddr": "10.0.0.1", 00:17:17.064 "trsvcid": "43052" 00:17:17.064 }, 00:17:17.064 "auth": { 00:17:17.064 "state": "completed", 00:17:17.064 "digest": "sha512", 00:17:17.064 "dhgroup": "ffdhe4096" 00:17:17.064 } 00:17:17.064 } 00:17:17.064 ]' 00:17:17.064 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.064 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.064 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.064 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:17.064 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.064 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.064 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.064 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.321 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:17:17.321 13:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:17:17.887 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.887 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.887 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.887 13:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.887 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.887 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.887 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.887 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.145 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:18.145 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.145 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.145 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:18.145 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:18.145 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.145 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.145 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.145 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.145 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.145 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.145 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.145 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.403 00:17:18.403 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.403 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.403 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.661 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.661 13:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.661 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.661 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.661 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.661 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.661 { 00:17:18.661 "cntlid": 125, 00:17:18.661 "qid": 0, 00:17:18.661 "state": "enabled", 00:17:18.661 "thread": "nvmf_tgt_poll_group_000", 00:17:18.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:18.661 "listen_address": { 00:17:18.661 "trtype": "TCP", 00:17:18.661 "adrfam": "IPv4", 00:17:18.661 "traddr": "10.0.0.2", 00:17:18.661 "trsvcid": "4420" 00:17:18.661 }, 00:17:18.661 "peer_address": { 00:17:18.661 "trtype": "TCP", 00:17:18.661 "adrfam": "IPv4", 00:17:18.661 "traddr": "10.0.0.1", 00:17:18.661 "trsvcid": "43076" 00:17:18.661 }, 00:17:18.661 "auth": { 00:17:18.661 "state": "completed", 00:17:18.661 "digest": "sha512", 00:17:18.661 "dhgroup": "ffdhe4096" 00:17:18.661 } 00:17:18.661 } 00:17:18.661 ]' 00:17:18.661 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.662 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.662 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.662 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:18.662 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.662 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.662 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.662 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.920 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:17:18.920 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:17:19.486 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.486 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.486 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.486 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.486 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.486 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.486 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:19.486 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:19.745 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:19.745 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.745 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:19.745 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:19.745 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.745 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.745 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:19.745 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.745 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.745 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.745 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.745 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.745 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.003 00:17:20.003 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.003 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.003 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.263 13:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.263 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.263 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.263 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.263 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.263 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.263 { 00:17:20.263 "cntlid": 127, 00:17:20.263 "qid": 0, 00:17:20.263 "state": "enabled", 00:17:20.263 "thread": "nvmf_tgt_poll_group_000", 00:17:20.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:20.263 "listen_address": { 00:17:20.263 "trtype": "TCP", 00:17:20.263 "adrfam": "IPv4", 00:17:20.263 "traddr": "10.0.0.2", 00:17:20.263 "trsvcid": "4420" 00:17:20.263 }, 00:17:20.263 "peer_address": { 00:17:20.263 "trtype": "TCP", 00:17:20.263 "adrfam": "IPv4", 00:17:20.263 "traddr": "10.0.0.1", 00:17:20.263 "trsvcid": "43110" 00:17:20.263 }, 00:17:20.263 "auth": { 00:17:20.263 "state": "completed", 00:17:20.263 "digest": "sha512", 00:17:20.263 "dhgroup": "ffdhe4096" 00:17:20.263 } 00:17:20.263 } 00:17:20.263 ]' 00:17:20.263 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.263 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.263 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.263 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.263 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.263 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.263 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.263 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.522 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:17:20.522 13:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:17:21.089 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.089 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.089 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.089 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.089 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.089 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.089 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.089 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:21.089 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:21.347 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:21.347 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.347 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.347 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:21.347 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:21.347 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.347 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.347 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.347 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.347 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.347 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.347 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.348 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.606 00:17:21.606 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.606 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.606 
13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.865 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.865 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.865 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.865 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.865 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.865 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.865 { 00:17:21.865 "cntlid": 129, 00:17:21.865 "qid": 0, 00:17:21.865 "state": "enabled", 00:17:21.865 "thread": "nvmf_tgt_poll_group_000", 00:17:21.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:21.865 "listen_address": { 00:17:21.865 "trtype": "TCP", 00:17:21.865 "adrfam": "IPv4", 00:17:21.865 "traddr": "10.0.0.2", 00:17:21.865 "trsvcid": "4420" 00:17:21.865 }, 00:17:21.865 "peer_address": { 00:17:21.865 "trtype": "TCP", 00:17:21.865 "adrfam": "IPv4", 00:17:21.865 "traddr": "10.0.0.1", 00:17:21.865 "trsvcid": "41250" 00:17:21.865 }, 00:17:21.865 "auth": { 00:17:21.865 "state": "completed", 00:17:21.865 "digest": "sha512", 00:17:21.865 "dhgroup": "ffdhe6144" 00:17:21.865 } 00:17:21.865 } 00:17:21.865 ]' 00:17:21.865 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.865 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.865 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.124 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:22.124 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.124 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.124 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.124 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.124 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:17:22.124 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret 
DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:17:22.689 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.948 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.515 00:17:23.515 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.515 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.515 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.515 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.515 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.515 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.515 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.515 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.515 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.515 { 00:17:23.515 "cntlid": 131, 00:17:23.515 "qid": 0, 00:17:23.515 "state": "enabled", 00:17:23.515 "thread": "nvmf_tgt_poll_group_000", 00:17:23.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:23.515 "listen_address": { 00:17:23.515 "trtype": "TCP", 00:17:23.515 "adrfam": "IPv4", 00:17:23.515 "traddr": "10.0.0.2", 00:17:23.515 "trsvcid": "4420" 00:17:23.515 }, 00:17:23.515 "peer_address": { 00:17:23.515 "trtype": "TCP", 00:17:23.515 "adrfam": "IPv4", 00:17:23.515 "traddr": "10.0.0.1", 00:17:23.515 "trsvcid": "41266" 00:17:23.515 }, 00:17:23.515 "auth": { 00:17:23.515 "state": "completed", 00:17:23.515 "digest": "sha512", 00:17:23.515 "dhgroup": "ffdhe6144" 00:17:23.515 } 00:17:23.515 } 00:17:23.515 ]' 00:17:23.515 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.515 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.515 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.774 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:23.774 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.774 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.774 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.774 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.032 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:17:24.032 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.599 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.857 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.858 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.858 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.116 00:17:25.116 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.116 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.116 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.375 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.375 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.375 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.375 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.375 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.375 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.375 { 00:17:25.375 "cntlid": 133, 00:17:25.375 "qid": 0, 00:17:25.375 "state": "enabled", 00:17:25.375 "thread": "nvmf_tgt_poll_group_000", 00:17:25.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:25.375 "listen_address": { 00:17:25.375 "trtype": "TCP", 00:17:25.375 "adrfam": "IPv4", 00:17:25.375 "traddr": "10.0.0.2", 00:17:25.375 "trsvcid": "4420" 00:17:25.375 }, 00:17:25.375 "peer_address": { 00:17:25.375 "trtype": "TCP", 00:17:25.375 "adrfam": "IPv4", 00:17:25.375 "traddr": "10.0.0.1", 00:17:25.375 "trsvcid": "41306" 00:17:25.375 }, 00:17:25.375 "auth": { 00:17:25.375 "state": "completed", 00:17:25.375 "digest": "sha512", 00:17:25.375 "dhgroup": "ffdhe6144" 00:17:25.375 } 00:17:25.375 } 00:17:25.375 ]' 00:17:25.375 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.375 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.375 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.375 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:25.375 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.375 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.375 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.375 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.634 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret 
DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:17:25.634 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:17:26.200 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.201 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.201 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.201 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.201 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.201 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.201 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:26.201 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:26.459 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:26.459 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.459 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:26.459 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:26.459 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:26.459 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.459 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:26.459 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.459 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.459 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.459 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:26.459 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:17:26.459 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.718 00:17:26.718 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.718 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.718 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.976 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.976 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.976 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.976 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.976 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.976 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.976 { 00:17:26.976 "cntlid": 135, 00:17:26.976 "qid": 0, 00:17:26.976 "state": "enabled", 00:17:26.976 "thread": "nvmf_tgt_poll_group_000", 00:17:26.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:26.976 "listen_address": { 00:17:26.976 "trtype": "TCP", 00:17:26.976 "adrfam": "IPv4", 00:17:26.976 "traddr": "10.0.0.2", 00:17:26.976 "trsvcid": "4420" 00:17:26.976 }, 00:17:26.976 "peer_address": { 00:17:26.976 "trtype": "TCP", 00:17:26.976 "adrfam": "IPv4", 00:17:26.976 "traddr": "10.0.0.1", 00:17:26.976 "trsvcid": "41334" 00:17:26.976 }, 00:17:26.976 "auth": { 00:17:26.976 "state": "completed", 00:17:26.976 "digest": "sha512", 00:17:26.976 "dhgroup": "ffdhe6144" 00:17:26.976 } 00:17:26.976 } 00:17:26.976 ]' 00:17:26.976 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.976 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.976 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.976 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:26.976 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.976 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.976 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.976 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.235 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:17:27.235 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:17:27.802 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.802 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.802 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.802 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.802 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.802 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.802 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.802 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:27.802 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:28.060 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:28.060 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.060 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:28.060 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:28.060 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:28.060 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.060 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.060 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.060 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.060 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.060 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.060 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.060 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.627 00:17:28.627 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.627 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.627 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.886 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.886 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.886 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.886 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.886 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.886 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.886 { 00:17:28.886 "cntlid": 137, 00:17:28.886 "qid": 0, 00:17:28.886 "state": "enabled", 00:17:28.886 "thread": "nvmf_tgt_poll_group_000", 00:17:28.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:28.886 "listen_address": { 00:17:28.886 "trtype": "TCP", 00:17:28.886 "adrfam": "IPv4", 00:17:28.886 "traddr": "10.0.0.2", 00:17:28.886 "trsvcid": "4420" 00:17:28.886 }, 00:17:28.886 "peer_address": { 00:17:28.886 "trtype": "TCP", 00:17:28.886 "adrfam": "IPv4", 00:17:28.886 "traddr": "10.0.0.1", 00:17:28.886 "trsvcid": "41350" 00:17:28.886 }, 00:17:28.886 "auth": { 00:17:28.886 "state": "completed", 00:17:28.886 "digest": "sha512", 00:17:28.886 "dhgroup": "ffdhe8192" 00:17:28.886 } 00:17:28.886 } 00:17:28.886 ]' 00:17:28.886 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.886 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.886 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.886 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.886 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.886 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.886 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.887 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.145 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:17:29.145 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:17:29.713 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.713 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.713 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.713 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.713 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.713 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.713 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:29.713 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:29.971 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:29.971 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.971 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.971 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:29.971 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:29.971 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.971 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.971 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.971 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.971 13:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.971 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.971 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.971 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.678 00:17:30.678 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.678 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.678 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.678 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.678 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.678 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.678 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.678 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.678 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.678 { 00:17:30.678 "cntlid": 139, 00:17:30.678 "qid": 0, 00:17:30.678 "state": "enabled", 00:17:30.678 "thread": "nvmf_tgt_poll_group_000", 00:17:30.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:30.678 "listen_address": { 00:17:30.678 "trtype": "TCP", 00:17:30.678 "adrfam": "IPv4", 00:17:30.678 "traddr": "10.0.0.2", 00:17:30.678 "trsvcid": "4420" 00:17:30.678 }, 00:17:30.678 "peer_address": { 00:17:30.678 "trtype": "TCP", 00:17:30.678 "adrfam": "IPv4", 00:17:30.678 "traddr": "10.0.0.1", 00:17:30.678 "trsvcid": "41364" 00:17:30.678 }, 00:17:30.678 "auth": { 00:17:30.678 "state": "completed", 00:17:30.678 "digest": "sha512", 00:17:30.678 "dhgroup": "ffdhe8192" 00:17:30.678 } 00:17:30.678 } 00:17:30.678 ]' 00:17:30.678 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.678 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.678 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.678 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.678 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.678 13:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.678 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.678 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.970 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:17:30.970 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: --dhchap-ctrl-secret DHHC-1:02:MzVkNmU2MTc3YzA4N2QyMDQ1ZWRjN2U5YWFmNjhiNzMzZDZhNmI2YmY0ZWRkYmExQ2uXoQ==: 00:17:31.556 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.556 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.556 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.556 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.556 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.556 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.556 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.556 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.815 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:31.815 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.815 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.815 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:31.815 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:31.815 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.815 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.815 13:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.815 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.815 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.815 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.815 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.815 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.383 00:17:32.383 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.383 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.383 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.383 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.383 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.383 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.383 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.383 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.383 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.383 { 00:17:32.383 "cntlid": 141, 00:17:32.383 "qid": 0, 00:17:32.383 "state": "enabled", 00:17:32.383 "thread": "nvmf_tgt_poll_group_000", 00:17:32.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:32.383 "listen_address": { 00:17:32.383 "trtype": "TCP", 00:17:32.383 "adrfam": "IPv4", 00:17:32.383 "traddr": "10.0.0.2", 00:17:32.383 "trsvcid": "4420" 00:17:32.383 }, 00:17:32.383 "peer_address": { 00:17:32.383 "trtype": "TCP", 00:17:32.383 "adrfam": "IPv4", 00:17:32.383 "traddr": "10.0.0.1", 00:17:32.383 "trsvcid": "44222" 00:17:32.383 }, 00:17:32.383 "auth": { 00:17:32.383 "state": "completed", 00:17:32.383 "digest": "sha512", 00:17:32.383 "dhgroup": "ffdhe8192" 00:17:32.383 } 00:17:32.383 } 00:17:32.383 ]' 00:17:32.383 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.642 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.642 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.642 13:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.642 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.642 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.642 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.642 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.901 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:17:32.901 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:01:ZmRkZTI0ZWFkZjU0OWUyN2U0ZmE4NjI2Mzk1ZTIzNmFINHpP: 00:17:33.469 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.469 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.469 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.469 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.469 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.469 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.469 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.469 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.469 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:33.469 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.469 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.469 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:33.469 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:33.469 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.469 13:09:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:33.469 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.469 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.728 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.728 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.728 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.728 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.986 00:17:33.986 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.986 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.986 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.244 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.244 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.244 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.244 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.244 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.244 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.244 { 00:17:34.244 "cntlid": 143, 00:17:34.244 "qid": 0, 00:17:34.244 "state": "enabled", 00:17:34.244 "thread": "nvmf_tgt_poll_group_000", 00:17:34.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:34.244 "listen_address": { 00:17:34.244 "trtype": "TCP", 00:17:34.244 "adrfam": "IPv4", 00:17:34.244 "traddr": "10.0.0.2", 00:17:34.244 "trsvcid": "4420" 00:17:34.244 }, 00:17:34.244 "peer_address": { 00:17:34.244 "trtype": "TCP", 00:17:34.244 "adrfam": "IPv4", 00:17:34.244 "traddr": "10.0.0.1", 00:17:34.244 "trsvcid": "44240" 00:17:34.244 }, 00:17:34.244 "auth": { 00:17:34.244 "state": "completed", 00:17:34.244 "digest": "sha512", 00:17:34.244 "dhgroup": "ffdhe8192" 00:17:34.244 } 00:17:34.244 } 00:17:34.244 ]' 00:17:34.244 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.244 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.245 
13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.503 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.503 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.503 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.503 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.503 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.503 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:17:34.503 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:17:35.072 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.072 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.072 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.072 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.331 13:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.331 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.900 00:17:35.900 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.900 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.900 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.165 { 00:17:36.165 "cntlid": 145, 00:17:36.165 "qid": 0, 00:17:36.165 "state": "enabled", 00:17:36.165 "thread": "nvmf_tgt_poll_group_000", 00:17:36.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:36.165 "listen_address": { 00:17:36.165 "trtype": "TCP", 00:17:36.165 "adrfam": "IPv4", 00:17:36.165 "traddr": "10.0.0.2", 00:17:36.165 "trsvcid": "4420" 00:17:36.165 }, 00:17:36.165 "peer_address": { 00:17:36.165 
"trtype": "TCP", 00:17:36.165 "adrfam": "IPv4", 00:17:36.165 "traddr": "10.0.0.1", 00:17:36.165 "trsvcid": "44258" 00:17:36.165 }, 00:17:36.165 "auth": { 00:17:36.165 "state": "completed", 00:17:36.165 "digest": "sha512", 00:17:36.165 "dhgroup": "ffdhe8192" 00:17:36.165 } 00:17:36.165 } 00:17:36.165 ]' 00:17:36.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.423 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:17:36.423 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTVlYmRlNjhmYzBjOGQ3YzFmMmQ1ZGM5OTliZDE5NWUyOGZlOGQyYThjYjY4YTE0R/qDaQ==: --dhchap-ctrl-secret DHHC-1:03:MWY5NGQ5Mjk3MmY1NjA1MzJiZjZmZGQzNjQ1ZDg3YmVjZDkwMDQzOWVhNTYxYTY2OGI3MjQ5YzQyNDlkZjgwNOFrMrM=: 00:17:36.990 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.990 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.990 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.990 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.990 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.990 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:36.990 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.990 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.990 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.990 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:36.990 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:36.990 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:36.990 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:36.990 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:36.990 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:36.990 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:36.991 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:36.991 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:36.991 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:37.558 request: 00:17:37.558 { 00:17:37.558 "name": "nvme0", 00:17:37.558 "trtype": "tcp", 00:17:37.558 "traddr": "10.0.0.2", 00:17:37.558 "adrfam": "ipv4", 00:17:37.558 "trsvcid": "4420", 00:17:37.558 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:37.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:37.558 "prchk_reftag": false, 00:17:37.558 "prchk_guard": false, 00:17:37.558 "hdgst": false, 00:17:37.558 "ddgst": false, 00:17:37.558 "dhchap_key": "key2", 00:17:37.558 "allow_unrecognized_csi": false, 00:17:37.558 "method": "bdev_nvme_attach_controller", 00:17:37.558 "req_id": 1 00:17:37.558 } 00:17:37.558 Got JSON-RPC error response 00:17:37.558 response: 00:17:37.558 { 00:17:37.558 "code": -5, 00:17:37.558 "message": "Input/output error" 00:17:37.558 } 00:17:37.558 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:37.558 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:37.558 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:37.558 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:37.558 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.558 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.558 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.558 13:09:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.558 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.559 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.559 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.559 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.559 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:37.559 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:37.559 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:37.559 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:37.559 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.559 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:37.559 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.559 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:37.559 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:37.559 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.127 request: 00:17:38.127 { 00:17:38.127 "name": "nvme0", 00:17:38.127 "trtype": "tcp", 00:17:38.127 "traddr": "10.0.0.2", 00:17:38.127 "adrfam": "ipv4", 00:17:38.127 "trsvcid": "4420", 00:17:38.127 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:38.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:38.127 "prchk_reftag": false, 00:17:38.127 "prchk_guard": false, 00:17:38.127 "hdgst": false, 00:17:38.127 "ddgst": false, 00:17:38.127 "dhchap_key": "key1", 00:17:38.127 "dhchap_ctrlr_key": "ckey2", 00:17:38.127 "allow_unrecognized_csi": false, 00:17:38.127 "method": "bdev_nvme_attach_controller", 00:17:38.127 "req_id": 1 00:17:38.127 } 00:17:38.127 Got JSON-RPC error response 00:17:38.127 response: 00:17:38.127 { 00:17:38.127 "code": -5, 00:17:38.127 "message": "Input/output error" 00:17:38.127 } 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:38.127 13:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.127 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.128 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.387 request: 00:17:38.387 { 00:17:38.387 "name": "nvme0", 00:17:38.387 "trtype": "tcp", 00:17:38.387 "traddr": "10.0.0.2", 00:17:38.387 "adrfam": "ipv4", 00:17:38.387 "trsvcid": "4420", 00:17:38.387 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:38.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:38.387 "prchk_reftag": false, 00:17:38.387 "prchk_guard": false, 00:17:38.387 "hdgst": false, 00:17:38.387 "ddgst": false, 00:17:38.387 "dhchap_key": "key1", 00:17:38.387 "dhchap_ctrlr_key": "ckey1", 00:17:38.387 "allow_unrecognized_csi": false, 00:17:38.387 "method": "bdev_nvme_attach_controller", 00:17:38.387 "req_id": 1 00:17:38.387 } 00:17:38.387 Got JSON-RPC error response 00:17:38.387 response: 00:17:38.387 { 00:17:38.387 "code": -5, 00:17:38.387 "message": "Input/output error" 00:17:38.387 } 00:17:38.387 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:38.387 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:38.387 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:38.387 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:38.387 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.387 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.387 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.387 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.387 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2823924 00:17:38.387 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2823924 ']' 00:17:38.387 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2823924 00:17:38.387 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:38.387 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.387 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2823924 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2823924' 00:17:38.647 killing process with pid 2823924 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2823924 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2823924 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2846169 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2846169 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2846169 ']' 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.647 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.906 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.906 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:38.906 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:38.906 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:38.906 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.906 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.906 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:38.906 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2846169 00:17:38.906 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2846169 ']' 00:17:38.906 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.906 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.906 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
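The restart just traced follows the standard SPDK autotest pattern: kill the old target, relaunch nvmf_tgt with --wait-for-rpc so it pauses before framework initialization, then poll the RPC Unix socket until it answers. Below is a minimal sketch of that pattern, reusing the netns name, socket path, and nvmf_tgt flags from the log; the polling loop and the explicit framework_start_init call are an illustrative reconstruction, not the actual autotest_common.sh helpers.

    # Relaunch the target paused at RPC init (sketch only; paths abbreviated).
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Poll the RPC socket until the target responds; spdk_get_version is
    # one of the RPCs permitted before framework init completes.
    rpc=./scripts/rpc.py
    for _ in $(seq 1 100); do
        $rpc -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done

    # With --wait-for-rpc, initialization only finishes on explicit request.
    $rpc -s /var/tmp/spdk.sock framework_start_init

The pause window is typically used to apply configuration that must land before subsystem init, which is why the test opts into it here.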
00:17:38.906 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.906 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.165 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.165 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:39.165 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:39.165 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.165 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.165 null0 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.d2a 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Hl9 ]] 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Hl9 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.uKF 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.XTF ]] 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XTF 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:39.424 13:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.sfc 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.8E9 ]] 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8E9 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.m7G 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:39.424 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
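In this keyring-based variant the DH-HMAC-CHAP secrets live in files registered by name via keyring_file_add_key, and every later RPC refers to them as key0..key3 / ckey0..ckey2 instead of passing the secret inline; the rpc.py invocation that follows is the host-side half of the key3 attach. A condensed sketch of the flow, using the socket paths, NQNs, and the /tmp/spdk.key-sha512.m7G file name from the log; the host-side keyring_file_add_key line is an assumption, inferred from the attach referencing key3 by name.

    rpc=./scripts/rpc.py

    # Target side: register the secret file under the name "key3".
    $rpc -s /var/tmp/spdk.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.m7G

    # Target side: allow the host NQN on the subsystem, bound to key3.
    $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-key key3

    # Host side: the initiator app needs the same keyring entry (assumed
    # here) before it can attach with --dhchap-key key3.
    $rpc -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.m7G
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

Because only the key name crosses the RPC boundary, the DHHC-1 strings never appear on the command line, unlike the inline --dhchap-secret form used by nvme connect earlier in the log.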
00:17:39.425 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.992 nvme0n1 00:17:40.251 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.251 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.252 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.252 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.252 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.252 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.252 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.252 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.252 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.252 { 00:17:40.252 "cntlid": 1, 00:17:40.252 "qid": 0, 00:17:40.252 "state": "enabled", 00:17:40.252 "thread": "nvmf_tgt_poll_group_000", 00:17:40.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:40.252 "listen_address": { 00:17:40.252 "trtype": "TCP", 00:17:40.252 "adrfam": "IPv4", 00:17:40.252 "traddr": "10.0.0.2", 00:17:40.252 "trsvcid": "4420" 00:17:40.252 }, 00:17:40.252 "peer_address": { 00:17:40.252 "trtype": "TCP", 00:17:40.252 "adrfam": "IPv4", 00:17:40.252 "traddr": "10.0.0.1", 00:17:40.252 "trsvcid": "44308" 00:17:40.252 }, 00:17:40.252 "auth": { 00:17:40.252 "state": "completed", 00:17:40.252 "digest": "sha512", 00:17:40.252 "dhgroup": "ffdhe8192" 00:17:40.252 } 00:17:40.252 } 00:17:40.252 ]' 00:17:40.252 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.511 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.511 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.511 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.511 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.511 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.511 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.511 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.770 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:17:40.770 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:17:41.338 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.338 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.338 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.338 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.338 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.338 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:41.338 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.338 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.338 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.338 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:41.338 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.597 request: 00:17:41.597 { 00:17:41.597 "name": "nvme0", 00:17:41.597 "trtype": "tcp", 00:17:41.597 "traddr": "10.0.0.2", 00:17:41.597 "adrfam": "ipv4", 00:17:41.597 "trsvcid": "4420", 00:17:41.597 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:41.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:41.597 "prchk_reftag": false, 00:17:41.597 "prchk_guard": false, 00:17:41.597 "hdgst": false, 00:17:41.597 "ddgst": false, 00:17:41.597 "dhchap_key": "key3", 00:17:41.597 "allow_unrecognized_csi": false, 00:17:41.597 "method": "bdev_nvme_attach_controller", 00:17:41.597 "req_id": 1 00:17:41.597 } 00:17:41.597 Got JSON-RPC error response 00:17:41.597 response: 00:17:41.597 { 00:17:41.597 "code": -5, 00:17:41.597 "message": "Input/output error" 00:17:41.597 } 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:41.597 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:41.855 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:41.855 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:41.855 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:41.855 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:41.855 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.855 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:41.855 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.855 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:41.855 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.855 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.114 request: 00:17:42.114 { 00:17:42.114 "name": "nvme0", 00:17:42.114 "trtype": "tcp", 00:17:42.114 "traddr": "10.0.0.2", 00:17:42.114 "adrfam": "ipv4", 00:17:42.114 "trsvcid": "4420", 00:17:42.114 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:42.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:42.114 "prchk_reftag": false, 00:17:42.114 "prchk_guard": false, 00:17:42.114 "hdgst": false, 00:17:42.114 "ddgst": false, 00:17:42.114 "dhchap_key": "key3", 00:17:42.114 "allow_unrecognized_csi": false, 00:17:42.114 "method": "bdev_nvme_attach_controller", 00:17:42.114 "req_id": 1 00:17:42.114 } 00:17:42.114 Got JSON-RPC error response 00:17:42.114 response: 00:17:42.114 { 00:17:42.114 "code": -5, 00:17:42.114 "message": "Input/output error" 00:17:42.114 } 00:17:42.114 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:42.114 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:42.114 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:42.114 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:42.114 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:42.114 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:42.114 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:42.115 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.115 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.115 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.373 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:42.374 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:42.632 request: 00:17:42.632 { 00:17:42.632 "name": "nvme0", 00:17:42.632 "trtype": "tcp", 00:17:42.632 "traddr": "10.0.0.2", 00:17:42.632 "adrfam": "ipv4", 00:17:42.632 "trsvcid": "4420", 00:17:42.632 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:42.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:42.632 "prchk_reftag": false, 00:17:42.632 "prchk_guard": false, 00:17:42.632 "hdgst": false, 00:17:42.632 "ddgst": false, 00:17:42.632 "dhchap_key": "key0", 00:17:42.632 "dhchap_ctrlr_key": "key1", 00:17:42.632 "allow_unrecognized_csi": false, 00:17:42.632 "method": "bdev_nvme_attach_controller", 00:17:42.632 "req_id": 1 00:17:42.632 } 00:17:42.632 Got JSON-RPC error response 00:17:42.632 response: 00:17:42.632 { 00:17:42.632 "code": -5, 00:17:42.632 "message": "Input/output error" 00:17:42.632 } 00:17:42.632 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:42.632 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:42.632 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:42.632 13:09:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:42.632 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:42.632 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:42.632 13:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:42.890 nvme0n1 00:17:42.890 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:42.890 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:42.890 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.148 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.148 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.148 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.407 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:43.407 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.407 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.407 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.407 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:43.407 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:43.407 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:43.974 nvme0n1 00:17:44.241 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:44.241 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:44.241 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.241 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.241 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:44.241 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.241 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.241 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.241 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:44.241 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:44.241 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.499 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.499 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:17:44.499 13:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: --dhchap-ctrl-secret DHHC-1:03:ZGYyMTU5NGE5MzBhNzczNmI2OTk4Yzk0ZGFkYWJiYTdkNzg5ZmE1NDM3OTc5NjFiMTM0ZmRkZTRkODg2MzM1MqD9tNE=: 00:17:45.068 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:45.068 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:45.068 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:45.068 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:45.068 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:45.068 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:45.068 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:45.068 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.068 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.327 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:17:45.327 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:45.327 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:45.327 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:45.327 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:45.327 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:45.327 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:45.327 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:45.327 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:45.327 13:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:45.895 request: 00:17:45.895 { 00:17:45.895 "name": "nvme0", 00:17:45.895 "trtype": "tcp", 00:17:45.895 "traddr": "10.0.0.2", 00:17:45.895 "adrfam": "ipv4", 00:17:45.895 "trsvcid": "4420", 00:17:45.895 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:45.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:45.895 "prchk_reftag": false, 00:17:45.895 "prchk_guard": false, 00:17:45.895 "hdgst": false, 00:17:45.895 "ddgst": false, 00:17:45.895 "dhchap_key": "key1", 00:17:45.895 "allow_unrecognized_csi": false, 00:17:45.895 "method": "bdev_nvme_attach_controller", 00:17:45.895 "req_id": 1 00:17:45.895 } 00:17:45.895 Got JSON-RPC error response 00:17:45.895 response: 00:17:45.895 { 00:17:45.895 "code": -5, 00:17:45.895 "message": "Input/output error" 00:17:45.895 } 00:17:45.895 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:45.895 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:45.895 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:45.895 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:45.895 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:45.895 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:45.895 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:46.461 nvme0n1 00:17:46.461 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:46.461 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.461 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:46.721 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.721 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.721 13:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.979 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.979 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.979 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.979 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.979 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:46.979 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:46.979 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:47.238 nvme0n1 00:17:47.238 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:47.238 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:47.238 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: '' 2s 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: ]] 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MTNhYWY2NmRkZGRlYWM1MWJjYWQ0YWUzOGM5YWZhZDOJERMi: 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:47.498 13:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: 2s 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: ]] 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ODk0OGY2NTI4ZTNkMGRkOTIzYjRlM2I1NDBjNmQ5ZDdlZDc5ZThkMGI0ZDc5MTll2W2qeg==: 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:50.031 13:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:51.936 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:51.936 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:51.936 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:51.936 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:51.936 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:51.936 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:51.936 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:51.936 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.936 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:51.936 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.936 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.936 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.936 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:51.936 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:51.936 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:52.504 nvme0n1 00:17:52.504 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:52.504 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.504 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.504 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.504 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:52.504 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:53.072 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:53.072 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.072 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:53.072 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.072 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.072 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.072 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.072 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.072 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:53.072 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:53.331 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:53.331 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:53.331 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.590 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.590 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:53.590 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.590 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.590 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.590 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:53.590 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:53.590 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:53.590 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:53.590 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.590 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:53.590 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.590 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:53.590 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:54.158 request: 00:17:54.158 { 00:17:54.158 "name": "nvme0", 00:17:54.158 "dhchap_key": "key1", 00:17:54.158 "dhchap_ctrlr_key": "key3", 00:17:54.158 "method": "bdev_nvme_set_keys", 00:17:54.158 "req_id": 1 00:17:54.158 } 00:17:54.158 Got JSON-RPC error response 00:17:54.158 response: 00:17:54.158 { 00:17:54.158 "code": -13, 00:17:54.159 "message": "Permission denied" 00:17:54.159 } 00:17:54.159 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:54.159 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.159 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.159 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.159 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:54.159 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:54.159 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.159 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:17:54.159 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:55.536 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:55.536 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:55.536 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.536 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:55.536 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:55.536 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.536 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.536 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.536 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:55.536 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:55.536 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:56.104 nvme0n1 00:17:56.104 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:56.104 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.104 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.104 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.104 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:56.104 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:56.104 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:56.104 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
00:17:56.104 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.104 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:56.104 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.104 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:56.104 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:56.670 request: 00:17:56.670 { 00:17:56.670 "name": "nvme0", 00:17:56.670 "dhchap_key": "key2", 00:17:56.670 "dhchap_ctrlr_key": "key0", 00:17:56.670 "method": "bdev_nvme_set_keys", 00:17:56.670 "req_id": 1 00:17:56.670 } 00:17:56.670 Got JSON-RPC error response 00:17:56.670 response: 00:17:56.670 { 00:17:56.670 "code": -13, 00:17:56.670 "message": "Permission denied" 00:17:56.670 } 00:17:56.670 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:56.670 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:56.670 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:56.670 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:56.670 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:56.670 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:56.670 13:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.928 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:56.928 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:57.864 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:57.864 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:57.864 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.124 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:58.124 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:58.124 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:58.124 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2823948 00:17:58.124 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2823948 ']' 00:17:58.124 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2823948 00:17:58.124 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:58.124 
13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.124 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2823948 00:17:58.124 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:58.124 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:58.124 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2823948' 00:17:58.124 killing process with pid 2823948 00:17:58.124 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2823948 00:17:58.124 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2823948 00:17:58.383 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:58.383 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:58.383 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:58.383 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:58.384 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:58.384 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:58.384 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:58.384 rmmod nvme_tcp 00:17:58.384 rmmod nvme_fabrics 00:17:58.384 rmmod nvme_keyring 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2846169 ']' 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2846169 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2846169 ']' 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2846169 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2846169 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2846169' 00:17:58.644 killing process with pid 2846169 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2846169 00:17:58.644 13:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2846169 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.644 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.d2a /tmp/spdk.key-sha256.uKF /tmp/spdk.key-sha384.sfc /tmp/spdk.key-sha512.m7G /tmp/spdk.key-sha512.Hl9 /tmp/spdk.key-sha384.XTF /tmp/spdk.key-sha256.8E9 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:01.182 00:18:01.182 real 2m33.658s 00:18:01.182 user 5m54.218s 00:18:01.182 sys 0m24.164s 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.182 ************************************ 00:18:01.182 END TEST nvmf_auth_target 00:18:01.182 ************************************ 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:01.182 ************************************ 00:18:01.182 START TEST nvmf_bdevio_no_huge 00:18:01.182 ************************************ 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:01.182 * Looking for test storage... 
00:18:01.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:01.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.182 --rc genhtml_branch_coverage=1 00:18:01.182 --rc genhtml_function_coverage=1 00:18:01.182 --rc genhtml_legend=1 00:18:01.182 --rc geninfo_all_blocks=1 00:18:01.182 --rc geninfo_unexecuted_blocks=1 00:18:01.182 00:18:01.182 ' 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:01.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.182 --rc genhtml_branch_coverage=1 00:18:01.182 --rc genhtml_function_coverage=1 00:18:01.182 --rc genhtml_legend=1 00:18:01.182 --rc geninfo_all_blocks=1 00:18:01.182 --rc geninfo_unexecuted_blocks=1 00:18:01.182 00:18:01.182 ' 00:18:01.182 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:01.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.182 --rc genhtml_branch_coverage=1 00:18:01.183 --rc genhtml_function_coverage=1 00:18:01.183 --rc genhtml_legend=1 00:18:01.183 --rc geninfo_all_blocks=1 00:18:01.183 --rc geninfo_unexecuted_blocks=1 00:18:01.183 00:18:01.183 ' 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:01.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.183 --rc genhtml_branch_coverage=1 00:18:01.183 --rc genhtml_function_coverage=1 00:18:01.183 --rc genhtml_legend=1 00:18:01.183 --rc geninfo_all_blocks=1 00:18:01.183 --rc geninfo_unexecuted_blocks=1 00:18:01.183 00:18:01.183 ' 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
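[Editor's note] Worth noting in the PATH lines above: paths/export.sh prepends the Go, protoc, and golangci directories every time it is sourced, so each re-source grows PATH with another copy of the same prefix. A hedged sketch of an idempotent alternative follows; path_prepend is a hypothetical helper, not something paths/export.sh currently defines.

    path_prepend() {                 # add $1 to PATH only if it is not already there
        case ":$PATH:" in
            *":$1:"*) ;;             # already present, leave PATH unchanged
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/go/1.21.1/bin
    export PATH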
-- # '[' '' -eq 1 ']' 00:18:01.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:01.183 13:10:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.758 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:07.758 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:07.758 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:07.758 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:07.758 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:07.758 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:07.758 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:07.758 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:07.758 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:07.758 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:07.758 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:07.758 
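[Editor's note] The "[: : integer expression expected" message above is a genuine script defect the log captures: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' against a variable that is empty in this environment, and test's -eq requires integers on both sides. A generic reproduction and two defensive rewrites are sketched below; VAR is a placeholder, since the actual variable name at common.sh line 33 is not visible in the trace.

    VAR=""
    [ "$VAR" -eq 1 ]          # reproduces: [: : integer expression expected

    [ "${VAR:-0}" -eq 1 ]     # fix 1: default the empty value to 0 before comparing
    [[ "$VAR" == 1 ]]         # fix 2: compare as a string, which tolerates empty

The script still proceeds because the failed test simply takes the false branch, but the noise repeats on every source of common.sh (the identical message reappears later in the nvmf_tls setup).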
13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:07.758 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:07.758 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:07.758 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:07.759 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:07.759 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:07.759 Found net devices under 0000:86:00.0: cvl_0_0 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
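[Editor's note] The device scan above walks a vendor:device table (Intel e810 and x722 IDs plus a list of Mellanox IDs) against a prebuilt pci_bus_cache and reports the two ice-bound 0x8086:0x159b ports. One way to approximate that discovery without the cache is sketched here; this is a reconstruction for illustration, not the nvmf/common.sh implementation, and it assumes lspci is available.

    intel=8086
    declare -a pci_devs net_devs
    for id in 1592 159b; do                       # the two e810 device IDs from the trace
        while read -r addr _; do
            pci_devs+=("$addr")
        done < <(lspci -Dn -d "$intel:$id")       # -D keeps the PCI domain in the address
    done
    for pci in "${pci_devs[@]}"; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $dev ]] && net_devs+=("${dev##*/}")   # interface name, e.g. cvl_0_0
        done
    done
    printf 'Found net device %s\n' "${net_devs[@]}"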
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:07.759 Found net devices under 0000:86:00.1: cvl_0_1 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:07.759 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:07.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:18:07.759 00:18:07.759 --- 10.0.0.2 ping statistics --- 00:18:07.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.759 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:07.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:07.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:18:07.759 00:18:07.759 --- 10.0.0.1 ping statistics --- 00:18:07.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.759 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:07.759 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2853059 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2853059 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2853059 ']' 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
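[Editor's note] Condensing the namespace plumbing from the trace above: the target port cvl_0_0 is moved into a fresh network namespace, each side gets one address of the 10.0.0.0/24 pair, an iptables rule opens the NVMe/TCP port, and a ping in each direction proves the path before any NVMe traffic. Every command below appears in the log; only the ordering commentary is added.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'             # tag reused by cleanup later
    ping -c 1 10.0.0.2                                   # root ns -> target: 0.430 ms
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator: 0.229 ms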
-- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.760 [2024-11-19 13:10:10.331995] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:18:07.760 [2024-11-19 13:10:10.332042] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:07.760 [2024-11-19 13:10:10.417668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:07.760 [2024-11-19 13:10:10.465781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.760 [2024-11-19 13:10:10.465815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.760 [2024-11-19 13:10:10.465821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.760 [2024-11-19 13:10:10.465828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.760 [2024-11-19 13:10:10.465833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:07.760 [2024-11-19 13:10:10.466945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:07.760 [2024-11-19 13:10:10.466974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:07.760 [2024-11-19 13:10:10.467081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:07.760 [2024-11-19 13:10:10.467082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.760 [2024-11-19 13:10:10.624196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.760 Malloc0 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
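[Editor's note] At this point the trace has started the target via nvmfappstart -m 0x78: nvmf_tgt runs inside the namespace with hugepages disabled and 1 GiB of ordinary memory, and the harness waits for the RPC socket before configuring anything. A simplified stand-in for that launch-and-wait is below; waitforlisten in autotest_common.sh does more (process-name checks, retry limits), so treat this as a sketch.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=(ip netns exec cvl_0_0_ns_spdk)
    "${NS[@]}" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # poll the default RPC socket until the app answers
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.5
    done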
4420 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.760 [2024-11-19 13:10:10.668478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:07.760 { 00:18:07.760 "params": { 00:18:07.760 "name": "Nvme$subsystem", 00:18:07.760 "trtype": "$TEST_TRANSPORT", 00:18:07.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:07.760 "adrfam": "ipv4", 00:18:07.760 "trsvcid": "$NVMF_PORT", 00:18:07.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:07.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:07.760 "hdgst": ${hdgst:-false}, 00:18:07.760 "ddgst": ${ddgst:-false} 00:18:07.760 }, 00:18:07.760 "method": "bdev_nvme_attach_controller" 00:18:07.760 } 00:18:07.760 EOF 00:18:07.760 )") 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:07.760 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:07.760 "params": { 00:18:07.760 "name": "Nvme1", 00:18:07.760 "trtype": "tcp", 00:18:07.760 "traddr": "10.0.0.2", 00:18:07.760 "adrfam": "ipv4", 00:18:07.760 "trsvcid": "4420", 00:18:07.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:07.760 "hdgst": false, 00:18:07.760 "ddgst": false 00:18:07.760 }, 00:18:07.760 "method": "bdev_nvme_attach_controller" 00:18:07.760 }' 00:18:07.760 [2024-11-19 13:10:10.720998] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
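[Editor's note] The RPC sequence just traced builds the whole target side: a TCP transport, a 64 MiB/512 B malloc bdev, subsystem cnode1 carrying that namespace, and a listener on 10.0.0.2:4420. Replayed directly with rpc.py (rpc_cmd in the harness is a thin wrapper around it, so the arguments below are taken verbatim from the log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio initiator is then pointed at it by piping the generated bdev_nvme_attach_controller JSON (printed above) into bdevio --json /dev/fd/62, again with --no-huge -s 1024.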
00:18:07.760 [2024-11-19 13:10:10.721044] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2853294 ] 00:18:07.760 [2024-11-19 13:10:10.801258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:07.760 [2024-11-19 13:10:10.850307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.760 [2024-11-19 13:10:10.850414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.760 [2024-11-19 13:10:10.850415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.760 I/O targets: 00:18:07.760 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:07.760 00:18:07.760 00:18:07.760 CUnit - A unit testing framework for C - Version 2.1-3 00:18:07.760 http://cunit.sourceforge.net/ 00:18:07.760 00:18:07.760 00:18:07.760 Suite: bdevio tests on: Nvme1n1 00:18:07.760 Test: blockdev write read block ...passed 00:18:08.019 Test: blockdev write zeroes read block ...passed 00:18:08.019 Test: blockdev write zeroes read no split ...passed 00:18:08.019 Test: blockdev write zeroes read split ...passed 00:18:08.019 Test: blockdev write zeroes read split partial ...passed 00:18:08.019 Test: blockdev reset ...[2024-11-19 13:10:11.181760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:08.019 [2024-11-19 13:10:11.181829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1e920 (9): Bad file descriptor 00:18:08.019 [2024-11-19 13:10:11.236432] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:18:08.019 passed 00:18:08.019 Test: blockdev write read 8 blocks ...passed 00:18:08.019 Test: blockdev write read size > 128k ...passed 00:18:08.019 Test: blockdev write read invalid size ...passed 00:18:08.019 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:08.019 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:08.019 Test: blockdev write read max offset ...passed 00:18:08.278 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:08.279 Test: blockdev writev readv 8 blocks ...passed 00:18:08.279 Test: blockdev writev readv 30 x 1block ...passed 00:18:08.279 Test: blockdev writev readv block ...passed 00:18:08.279 Test: blockdev writev readv size > 128k ...passed 00:18:08.279 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:08.279 Test: blockdev comparev and writev ...[2024-11-19 13:10:11.445699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.279 [2024-11-19 13:10:11.445727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.279 [2024-11-19 13:10:11.445742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.279 [2024-11-19 13:10:11.445749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.279 [2024-11-19 13:10:11.445990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.279 [2024-11-19 13:10:11.446000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:08.279 [2024-11-19 13:10:11.446012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.279 [2024-11-19 13:10:11.446020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:08.279 [2024-11-19 13:10:11.446262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.279 [2024-11-19 13:10:11.446271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:08.279 [2024-11-19 13:10:11.446283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.279 [2024-11-19 13:10:11.446290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:08.279 [2024-11-19 13:10:11.446510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.279 [2024-11-19 13:10:11.446520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:08.279 [2024-11-19 13:10:11.446531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.279 [2024-11-19 13:10:11.446538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:08.279 passed 00:18:08.279 Test: blockdev nvme passthru rw ...passed 00:18:08.279 Test: blockdev nvme passthru vendor specific ...[2024-11-19 13:10:11.530315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:08.279 [2024-11-19 13:10:11.530333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:08.279 [2024-11-19 13:10:11.530442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:08.279 [2024-11-19 13:10:11.530458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:08.279 [2024-11-19 13:10:11.530565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:08.279 [2024-11-19 13:10:11.530574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:08.279 [2024-11-19 13:10:11.530682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:08.279 [2024-11-19 13:10:11.530691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:08.279 passed 00:18:08.279 Test: blockdev nvme admin passthru ...passed 00:18:08.279 Test: blockdev copy ...passed 00:18:08.279 00:18:08.279 Run Summary: Type Total Ran Passed Failed Inactive 00:18:08.279 suites 1 1 n/a 0 0 00:18:08.279 tests 23 23 23 0 0 00:18:08.279 asserts 152 152 152 0 n/a 00:18:08.279 00:18:08.279 Elapsed time = 1.061 seconds 00:18:08.538 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:08.538 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.538 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.538 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.538 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:08.538 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:08.538 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:08.538 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:08.538 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:08.538 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:08.538 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:08.538 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:08.538 rmmod nvme_tcp 00:18:08.538 rmmod nvme_fabrics 00:18:08.538 rmmod nvme_keyring 00:18:08.538 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:08.797 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:08.797 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:08.797 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2853059 ']' 00:18:08.797 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2853059 00:18:08.797 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2853059 ']' 00:18:08.797 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2853059 00:18:08.797 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:08.797 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.797 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2853059 00:18:08.797 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:08.797 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:08.797 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2853059' 00:18:08.797 killing process with pid 2853059 00:18:08.797 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2853059 00:18:08.797 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2853059 00:18:09.056 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:09.056 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:09.056 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:09.056 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:09.056 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:09.056 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:09.056 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:09.056 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:09.056 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:09.056 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.056 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.056 13:10:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:11.594 00:18:11.594 real 0m10.222s 00:18:11.594 user 0m10.772s 00:18:11.594 sys 0m5.336s 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
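[Editor's note] Teardown in the trace is the mirror image of setup: stop the target, unload the NVMe modules, strip only the iptables rules tagged SPDK_NVMF, and dissolve the namespace. A condensed replay follows, with the caveat that killprocess in the harness also verifies the process name (reactor_3 here) before killing.

    kill -0 "$nvmfpid" && kill "$nvmfpid" && wait "$nvmfpid"
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only our tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # cvl_0_0 returns to the root ns
    ip -4 addr flush cvl_0_1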
common/autotest_common.sh@10 -- # set +x 00:18:11.594 ************************************ 00:18:11.594 END TEST nvmf_bdevio_no_huge 00:18:11.594 ************************************ 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:11.594 ************************************ 00:18:11.594 START TEST nvmf_tls 00:18:11.594 ************************************ 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:11.594 * Looking for test storage... 00:18:11.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:11.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.594 --rc genhtml_branch_coverage=1 00:18:11.594 --rc genhtml_function_coverage=1 00:18:11.594 --rc genhtml_legend=1 00:18:11.594 --rc geninfo_all_blocks=1 00:18:11.594 --rc geninfo_unexecuted_blocks=1 00:18:11.594 00:18:11.594 ' 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:11.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.594 --rc genhtml_branch_coverage=1 00:18:11.594 --rc genhtml_function_coverage=1 00:18:11.594 --rc genhtml_legend=1 00:18:11.594 --rc geninfo_all_blocks=1 00:18:11.594 --rc geninfo_unexecuted_blocks=1 00:18:11.594 00:18:11.594 ' 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:11.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.594 --rc genhtml_branch_coverage=1 00:18:11.594 --rc genhtml_function_coverage=1 00:18:11.594 --rc genhtml_legend=1 00:18:11.594 --rc geninfo_all_blocks=1 00:18:11.594 --rc geninfo_unexecuted_blocks=1 00:18:11.594 00:18:11.594 ' 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:11.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.594 --rc genhtml_branch_coverage=1 00:18:11.594 --rc genhtml_function_coverage=1 00:18:11.594 --rc genhtml_legend=1 00:18:11.594 --rc geninfo_all_blocks=1 00:18:11.594 --rc geninfo_unexecuted_blocks=1 00:18:11.594 00:18:11.594 ' 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.594 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:11.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:11.595 13:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:18.185 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:18.185 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:18.185 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:18.186 Found net devices under 0000:86:00.0: cvl_0_0 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:18.186 Found net devices under 0000:86:00.1: cvl_0_1 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:18.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:18.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:18:18.186 00:18:18.186 --- 10.0.0.2 ping statistics --- 00:18:18.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.186 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:18.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:18.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:18:18.186 00:18:18.186 --- 10.0.0.1 ping statistics --- 00:18:18.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.186 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2857015 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2857015 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2857015 ']' 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.186 [2024-11-19 13:10:20.646906] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
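The discovery pass in the 00:18:18.185 records above builds per-family PCI ID lists (E810 0x1592/0x159b, X722 0x37d2, the Mellanox ConnectX range) and then walks the bus to pick NICs usable for the test. A condensed, illustrative restatement of that classification — the loop shape is a sketch, only the vendor/device IDs are taken from the trace:

  intel=0x8086 mellanox=0x15b3
  for pci in /sys/bus/pci/devices/*; do
      case "$(< "$pci/vendor"):$(< "$pci/device")" in
          "$intel:0x1592" | "$intel:0x159b") echo "e810 port: ${pci##*/}" ;;
          "$intel:0x37d2")                   echo "x722 port: ${pci##*/}" ;;
          "$mellanox:"*)                     echo "mlx port:  ${pci##*/}" ;;
      esac
  done

Here both 0000:86:00.0 and 0000:86:00.1 report 0x8086:0x159b (ice-driven E810 ports), so pci_devs ends up holding the two ports the rest of the run uses.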
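The 00:18:18.186 records then wire those two ports into a split-namespace topology: the target port cvl_0_0 moves into namespace cvl_0_0_ns_spdk with 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1, TCP port 4420 is opened through iptables, and one ping in each direction proves reachability before nvmf_tgt is started inside the namespace (the startup banner directly above) with --wait-for-rpc. Collected into one runnable sketch, names and addresses exactly as traced:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                             # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

Running the target behind ip netns exec keeps initiator and target on separate network stacks of the same host, so the suite exercises a real TCP connection rather than a loopback shortcut.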
00:18:18.186 [2024-11-19 13:10:20.646966] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.186 [2024-11-19 13:10:20.726457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.186 [2024-11-19 13:10:20.767124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.186 [2024-11-19 13:10:20.767158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.186 [2024-11-19 13:10:20.767166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.186 [2024-11-19 13:10:20.767174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.186 [2024-11-19 13:10:20.767181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:18.186 [2024-11-19 13:10:20.767755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:18.186 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:18.186 true 00:18:18.186 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.186 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:18.187 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:18.187 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:18.187 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:18.187 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.187 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:18.446 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:18.446 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:18.446 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:18.446 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.446 13:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:18.705 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:18.705 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:18.705 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.705 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:18.963 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:18.964 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:18.964 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:19.223 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:19.223 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:19.223 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:19.223 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:19.223 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:19.483 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:19.483 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:19.742 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:19.742 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:19.742 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:19.742 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:19.742 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:19.742 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:19.742 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:19.742 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:19.742 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.CNYuxPAb46 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.3adyqRTYYc 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.CNYuxPAb46 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.3adyqRTYYc 00:18:19.742 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:20.001 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:20.261 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.CNYuxPAb46 00:18:20.261 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CNYuxPAb46 00:18:20.261 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:20.520 [2024-11-19 13:10:23.689629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.520 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:20.779 13:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:20.779 [2024-11-19 13:10:24.062596] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:20.779 [2024-11-19 13:10:24.062812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.779 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:21.038 malloc0 00:18:21.038 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:21.297 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CNYuxPAb46 00:18:21.297 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:21.556 13:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.CNYuxPAb46 00:18:33.767 Initializing NVMe Controllers 00:18:33.767 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:33.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:33.767 Initialization complete. Launching workers. 00:18:33.767 ======================================================== 00:18:33.767 Latency(us) 00:18:33.767 Device Information : IOPS MiB/s Average min max 00:18:33.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16398.64 64.06 3902.86 770.10 6131.20 00:18:33.767 ======================================================== 00:18:33.768 Total : 16398.64 64.06 3902.86 770.10 6131.20 00:18:33.768 00:18:33.768 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CNYuxPAb46 00:18:33.768 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:33.768 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:33.768 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:33.768 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CNYuxPAb46 00:18:33.768 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:33.768 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2859400 00:18:33.768 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:33.768 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:33.768 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2859400 /var/tmp/bdevperf.sock 00:18:33.768 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2859400 ']' 00:18:33.768 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.768 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.768 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
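Before the listener came up, format_interchange_psk (the 00:18:19.742 records) turned the raw hex strings 00112233445566778899aabbccddeeff and ffeeddccbbaa99887766554433221100 into TLS PSK interchange keys of the form NVMeTLSkey-1:01:<base64>:. A sketch of what the inline python in format_key appears to compute: base64 of the configured key bytes with a little-endian CRC32 trailer appended, behind the NVMeTLSkey-1 prefix and a two-digit hash identifier (01 selecting SHA-256). The helper name below is illustrative; the transform is inferred from the traced output:

  psk_interchange() { # sketch; args: <key-string> <hash-id>
      python3 -c 'import base64, struct, sys, zlib
key = sys.argv[1].encode()                  # configured key bytes (here the ASCII hex string)
crc = struct.pack("<I", zlib.crc32(key))    # 4-byte little-endian CRC32 trailer
b64 = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), b64))' "$1" "$2"
  }
  psk_interchange 00112233445566778899aabbccddeeff 1
  # per the trace this should print NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

Each key is then written to a mktemp file (key_path and key_2_path) and restricted to 0600 before being handed to keyring_file_add_key.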
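Putting the target-side records together, the whole TLS bring-up is a short RPC sequence against the nvmf_tgt that was started with --wait-for-rpc (rpc.py written here without its full workspace path):

  rpc.py sock_set_default_impl -i ssl                   # target/tls.sh@71
  rpc.py sock_impl_set_options -i ssl --tls-version 13  # @131, after the version/ktls round-trips
  rpc.py framework_start_init                           # @132, finish the deferred init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.CNYuxPAb46
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

-k marks the listener as TLS (flagged experimental in the tcp.c notices above), and --psk binds host1 to the keyring entry; the spdk_nvme_perf run above then connects with the matching key file and sustains roughly 16.4k IOPS of 4 KiB randrw traffic over the encrypted connection, serving as the positive control for the failure cases that follow.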
00:18:33.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.768 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.768 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.768 [2024-11-19 13:10:34.994527] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:18:33.768 [2024-11-19 13:10:34.994576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859400 ] 00:18:33.768 [2024-11-19 13:10:35.069224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.768 [2024-11-19 13:10:35.109417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.768 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.768 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:33.768 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CNYuxPAb46 00:18:33.768 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:33.768 [2024-11-19 13:10:35.576900] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.768 TLSTESTn1 00:18:33.768 13:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:33.768 Running I/O for 10 seconds... 
00:18:34.705 5420.00 IOPS, 21.17 MiB/s [2024-11-19T12:10:39.018Z] 5384.50 IOPS, 21.03 MiB/s [2024-11-19T12:10:39.956Z] 5423.00 IOPS, 21.18 MiB/s [2024-11-19T12:10:40.892Z] 5461.75 IOPS, 21.33 MiB/s [2024-11-19T12:10:41.828Z] 5443.40 IOPS, 21.26 MiB/s [2024-11-19T12:10:43.205Z] 5439.17 IOPS, 21.25 MiB/s [2024-11-19T12:10:44.141Z] 5449.29 IOPS, 21.29 MiB/s [2024-11-19T12:10:45.076Z] 5450.75 IOPS, 21.29 MiB/s [2024-11-19T12:10:46.012Z] 5380.22 IOPS, 21.02 MiB/s [2024-11-19T12:10:46.012Z] 5345.90 IOPS, 20.88 MiB/s 00:18:42.635 Latency(us) 00:18:42.635 [2024-11-19T12:10:46.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.635 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:42.635 Verification LBA range: start 0x0 length 0x2000 00:18:42.635 TLSTESTn1 : 10.02 5347.61 20.89 0.00 0.00 23897.20 4872.46 31685.23 00:18:42.635 [2024-11-19T12:10:46.012Z] =================================================================================================================== 00:18:42.635 [2024-11-19T12:10:46.012Z] Total : 5347.61 20.89 0.00 0.00 23897.20 4872.46 31685.23 00:18:42.635 { 00:18:42.635 "results": [ 00:18:42.635 { 00:18:42.635 "job": "TLSTESTn1", 00:18:42.635 "core_mask": "0x4", 00:18:42.635 "workload": "verify", 00:18:42.635 "status": "finished", 00:18:42.635 "verify_range": { 00:18:42.635 "start": 0, 00:18:42.635 "length": 8192 00:18:42.635 }, 00:18:42.635 "queue_depth": 128, 00:18:42.635 "io_size": 4096, 00:18:42.635 "runtime": 10.020371, 00:18:42.635 "iops": 5347.606391020851, 00:18:42.635 "mibps": 20.8890874649252, 00:18:42.635 "io_failed": 0, 00:18:42.635 "io_timeout": 0, 00:18:42.635 "avg_latency_us": 23897.200652421387, 00:18:42.635 "min_latency_us": 4872.459130434782, 00:18:42.635 "max_latency_us": 31685.231304347824 00:18:42.635 } 00:18:42.635 ], 00:18:42.635 "core_count": 1 00:18:42.635 } 00:18:42.635 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:42.635 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2859400 00:18:42.635 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2859400 ']' 00:18:42.635 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2859400 00:18:42.635 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:42.635 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.635 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2859400 00:18:42.635 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:42.635 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:42.635 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2859400' 00:18:42.636 killing process with pid 2859400 00:18:42.636 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2859400 00:18:42.636 Received shutdown signal, test time was about 10.000000 seconds 00:18:42.636 00:18:42.636 Latency(us) 00:18:42.636 [2024-11-19T12:10:46.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.636 [2024-11-19T12:10:46.013Z] 
=================================================================================================================== 00:18:42.636 [2024-11-19T12:10:46.013Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.636 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2859400 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3adyqRTYYc 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3adyqRTYYc 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3adyqRTYYc 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3adyqRTYYc 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2861228 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2861228 /var/tmp/bdevperf.sock 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2861228 ']' 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
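From here the suite flips to negative cases. target/tls.sh@147 repeats the attach with the second key, /tmp/tmp.3adyqRTYYc, which was never registered with the target, and wraps run_bdevperf in NOT so the test passes only if the attach fails. A simplified sketch of that wrapper's contract — the traced version in autotest_common.sh additionally records the exit status (the es=1 below) and treats signal exits, the (( es > 128 )) check, separately:

  NOT() { ! "$@"; }                      # succeed exactly when the wrapped command fails
  NOT false && echo "failure path verified"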
00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.895 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.895 [2024-11-19 13:10:46.094009] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:18:42.895 [2024-11-19 13:10:46.094056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861228 ] 00:18:42.895 [2024-11-19 13:10:46.168297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.895 [2024-11-19 13:10:46.208425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.154 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.154 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:43.154 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3adyqRTYYc 00:18:43.154 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:43.413 [2024-11-19 13:10:46.687915] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.413 [2024-11-19 13:10:46.695382] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:43.413 [2024-11-19 13:10:46.696307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616170 (107): Transport endpoint is not connected 00:18:43.413 [2024-11-19 13:10:46.697301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616170 (9): Bad file descriptor 00:18:43.413 [2024-11-19 13:10:46.698302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:43.413 [2024-11-19 13:10:46.698312] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:43.413 [2024-11-19 13:10:46.698319] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:43.413 [2024-11-19 13:10:46.698329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:43.413 request: 00:18:43.413 { 00:18:43.413 "name": "TLSTEST", 00:18:43.413 "trtype": "tcp", 00:18:43.413 "traddr": "10.0.0.2", 00:18:43.413 "adrfam": "ipv4", 00:18:43.413 "trsvcid": "4420", 00:18:43.413 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.413 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.413 "prchk_reftag": false, 00:18:43.413 "prchk_guard": false, 00:18:43.413 "hdgst": false, 00:18:43.413 "ddgst": false, 00:18:43.413 "psk": "key0", 00:18:43.413 "allow_unrecognized_csi": false, 00:18:43.413 "method": "bdev_nvme_attach_controller", 00:18:43.413 "req_id": 1 00:18:43.413 } 00:18:43.413 Got JSON-RPC error response 00:18:43.413 response: 00:18:43.413 { 00:18:43.413 "code": -5, 00:18:43.413 "message": "Input/output error" 00:18:43.413 } 00:18:43.413 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2861228 00:18:43.413 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2861228 ']' 00:18:43.413 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2861228 00:18:43.413 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:43.413 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.413 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2861228 00:18:43.413 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:43.413 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:43.413 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2861228' 00:18:43.413 killing process with pid 2861228 00:18:43.413 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2861228 00:18:43.413 Received shutdown signal, test time was about 10.000000 seconds 00:18:43.413 00:18:43.413 Latency(us) 00:18:43.413 [2024-11-19T12:10:46.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.413 [2024-11-19T12:10:46.790Z] =================================================================================================================== 00:18:43.413 [2024-11-19T12:10:46.790Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:43.413 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2861228 00:18:43.672 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:43.672 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:43.672 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:43.672 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:43.672 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:43.672 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.CNYuxPAb46 00:18:43.672 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:43.672 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.CNYuxPAb46 00:18:43.672 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:43.672 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.672 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:43.672 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.673 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.CNYuxPAb46 00:18:43.673 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:43.673 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:43.673 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:43.673 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CNYuxPAb46 00:18:43.673 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:43.673 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2861252 00:18:43.673 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:43.673 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:43.673 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2861252 /var/tmp/bdevperf.sock 00:18:43.673 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2861252 ']' 00:18:43.673 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.673 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.673 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.673 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.673 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.673 [2024-11-19 13:10:46.963029] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:43.673 [2024-11-19 13:10:46.963076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861252 ] 00:18:43.673 [2024-11-19 13:10:47.038634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.932 [2024-11-19 13:10:47.081786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.932 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.932 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:43.932 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CNYuxPAb46 00:18:44.191 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:44.191 [2024-11-19 13:10:47.528570] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:44.191 [2024-11-19 13:10:47.537375] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:44.191 [2024-11-19 13:10:47.537399] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:44.191 [2024-11-19 13:10:47.537423] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:44.191 [2024-11-19 13:10:47.537883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92a170 (107): Transport endpoint is not connected 00:18:44.191 [2024-11-19 13:10:47.538877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92a170 (9): Bad file descriptor 00:18:44.191 [2024-11-19 13:10:47.539879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:44.191 [2024-11-19 13:10:47.539888] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:44.191 [2024-11-19 13:10:47.539895] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:44.191 [2024-11-19 13:10:47.539905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:44.191 request: 00:18:44.191 { 00:18:44.191 "name": "TLSTEST", 00:18:44.191 "trtype": "tcp", 00:18:44.191 "traddr": "10.0.0.2", 00:18:44.191 "adrfam": "ipv4", 00:18:44.191 "trsvcid": "4420", 00:18:44.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.191 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:44.191 "prchk_reftag": false, 00:18:44.191 "prchk_guard": false, 00:18:44.191 "hdgst": false, 00:18:44.191 "ddgst": false, 00:18:44.191 "psk": "key0", 00:18:44.191 "allow_unrecognized_csi": false, 00:18:44.191 "method": "bdev_nvme_attach_controller", 00:18:44.191 "req_id": 1 00:18:44.191 } 00:18:44.191 Got JSON-RPC error response 00:18:44.191 response: 00:18:44.191 { 00:18:44.191 "code": -5, 00:18:44.191 "message": "Input/output error" 00:18:44.191 } 00:18:44.191 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2861252 00:18:44.191 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2861252 ']' 00:18:44.191 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2861252 00:18:44.191 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:44.191 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.191 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2861252 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2861252' 00:18:44.451 killing process with pid 2861252 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2861252 00:18:44.451 Received shutdown signal, test time was about 10.000000 seconds 00:18:44.451 00:18:44.451 Latency(us) 00:18:44.451 [2024-11-19T12:10:47.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.451 [2024-11-19T12:10:47.828Z] =================================================================================================================== 00:18:44.451 [2024-11-19T12:10:47.828Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2861252 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.CNYuxPAb46 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.CNYuxPAb46 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.CNYuxPAb46 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CNYuxPAb46 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2861483 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2861483 /var/tmp/bdevperf.sock 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2861483 ']' 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.451 13:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.451 [2024-11-19 13:10:47.806995] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:44.451 [2024-11-19 13:10:47.807044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861483 ] 00:18:44.710 [2024-11-19 13:10:47.883641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.711 [2024-11-19 13:10:47.921703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.711 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.711 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:44.711 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CNYuxPAb46 00:18:44.969 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:45.229 [2024-11-19 13:10:48.384825] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:45.229 [2024-11-19 13:10:48.389559] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:45.229 [2024-11-19 13:10:48.389581] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:45.229 [2024-11-19 13:10:48.389606] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:45.229 [2024-11-19 13:10:48.390175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x695170 (107): Transport endpoint is not connected 00:18:45.229 [2024-11-19 13:10:48.391167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x695170 (9): Bad file descriptor 00:18:45.229 [2024-11-19 13:10:48.392168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:45.229 [2024-11-19 13:10:48.392177] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:45.229 [2024-11-19 13:10:48.392184] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:45.229 [2024-11-19 13:10:48.392194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:45.229 request: 00:18:45.229 { 00:18:45.229 "name": "TLSTEST", 00:18:45.229 "trtype": "tcp", 00:18:45.229 "traddr": "10.0.0.2", 00:18:45.229 "adrfam": "ipv4", 00:18:45.229 "trsvcid": "4420", 00:18:45.229 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:45.229 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.229 "prchk_reftag": false, 00:18:45.229 "prchk_guard": false, 00:18:45.229 "hdgst": false, 00:18:45.229 "ddgst": false, 00:18:45.229 "psk": "key0", 00:18:45.229 "allow_unrecognized_csi": false, 00:18:45.229 "method": "bdev_nvme_attach_controller", 00:18:45.229 "req_id": 1 00:18:45.229 } 00:18:45.229 Got JSON-RPC error response 00:18:45.229 response: 00:18:45.229 { 00:18:45.229 "code": -5, 00:18:45.229 "message": "Input/output error" 00:18:45.229 } 00:18:45.229 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2861483 00:18:45.229 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2861483 ']' 00:18:45.229 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2861483 00:18:45.229 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:45.229 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.229 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2861483 00:18:45.229 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:45.229 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:45.229 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2861483' 00:18:45.229 killing process with pid 2861483 00:18:45.229 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2861483 00:18:45.229 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.229 00:18:45.229 Latency(us) 00:18:45.229 [2024-11-19T12:10:48.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.229 [2024-11-19T12:10:48.606Z] =================================================================================================================== 00:18:45.229 [2024-11-19T12:10:48.606Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:45.229 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2861483 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:45.489 
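Worth unpacking the tcp_sock_get_key / posix_sock_psk_find_session_server_cb errors above: during the TLS handshake the target resolves the client's PSK identity, which NVMe/TCP assembles from a fixed "NVMe0R<hash>" prefix plus the host and subsystem NQNs (the "01" here appears to select the SHA-256 retained hash). Subsystem nqn.2016-06.io.spdk:cnode2 was never given a key for host1, so the lookup fails and bdev_nvme_attach_controller surfaces it as the -5 (Input/output error) response above — exactly the outcome this NOT-wrapped test expects. A minimal sketch of how that identity string is put together, with the prefix fields inferred from the log line rather than quoted from the spec:

    # Sketch: rebuild the PSK identity seen in the tcp_sock_get_key error.
    # The meaning of the "NVMe0R01" prefix fields is inferred from this log,
    # not authoritative.
    def psk_identity(hostnqn: str, subnqn: str, hash_id: str = "01") -> str:
        return f"NVMe0R{hash_id} {hostnqn} {subnqn}"

    assert psk_identity("nqn.2016-06.io.spdk:host1",
                        "nqn.2016-06.io.spdk:cnode2") \
        == "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2"

The test that starts in the trace lines below repeats the pattern with an empty PSK path, expecting keyring_file_add_key itself to refuse it.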
13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2861617 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2861617 /var/tmp/bdevperf.sock 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2861617 ']' 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.489 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.489 [2024-11-19 13:10:48.666601] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:45.489 [2024-11-19 13:10:48.666652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861617 ] 00:18:45.489 [2024-11-19 13:10:48.744174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.489 [2024-11-19 13:10:48.785491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.748 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.748 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:45.748 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:45.748 [2024-11-19 13:10:49.051194] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:45.748 [2024-11-19 13:10:49.051228] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:45.748 request: 00:18:45.748 { 00:18:45.748 "name": "key0", 00:18:45.748 "path": "", 00:18:45.748 "method": "keyring_file_add_key", 00:18:45.748 "req_id": 1 00:18:45.748 } 00:18:45.748 Got JSON-RPC error response 00:18:45.748 response: 00:18:45.748 { 00:18:45.748 "code": -1, 00:18:45.748 "message": "Operation not permitted" 00:18:45.748 } 00:18:45.748 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:46.013 [2024-11-19 13:10:49.247803] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:46.013 [2024-11-19 13:10:49.247834] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:46.013 request: 00:18:46.013 { 00:18:46.013 "name": "TLSTEST", 00:18:46.013 "trtype": "tcp", 00:18:46.013 "traddr": "10.0.0.2", 00:18:46.013 "adrfam": "ipv4", 00:18:46.013 "trsvcid": "4420", 00:18:46.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.013 "prchk_reftag": false, 00:18:46.013 "prchk_guard": false, 00:18:46.013 "hdgst": false, 00:18:46.013 "ddgst": false, 00:18:46.013 "psk": "key0", 00:18:46.013 "allow_unrecognized_csi": false, 00:18:46.013 "method": "bdev_nvme_attach_controller", 00:18:46.013 "req_id": 1 00:18:46.013 } 00:18:46.013 Got JSON-RPC error response 00:18:46.013 response: 00:18:46.013 { 00:18:46.013 "code": -126, 00:18:46.013 "message": "Required key not available" 00:18:46.013 } 00:18:46.013 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2861617 00:18:46.013 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2861617 ']' 00:18:46.013 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2861617 00:18:46.013 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:46.013 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.013 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2861617 00:18:46.013 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:46.013 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:46.013 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2861617' 00:18:46.013 killing process with pid 2861617 00:18:46.013 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2861617 00:18:46.013 Received shutdown signal, test time was about 10.000000 seconds 00:18:46.013 00:18:46.013 Latency(us) 00:18:46.013 [2024-11-19T12:10:49.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.013 [2024-11-19T12:10:49.390Z] =================================================================================================================== 00:18:46.013 [2024-11-19T12:10:49.390Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:46.013 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2861617 00:18:46.275 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:46.275 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:46.275 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:46.275 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:46.275 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:46.275 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2857015 00:18:46.275 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2857015 ']' 00:18:46.275 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2857015 00:18:46.275 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:46.275 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.275 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2857015 00:18:46.275 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:46.275 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:46.275 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2857015' 00:18:46.275 killing process with pid 2857015 00:18:46.275 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2857015 00:18:46.275 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2857015 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:46.538 13:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.H3r2RAXB84 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.H3r2RAXB84 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2861749 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2861749 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2861749 ']' 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.538 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.538 [2024-11-19 13:10:49.804594] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:18:46.538 [2024-11-19 13:10:49.804646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.538 [2024-11-19 13:10:49.882918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.827 [2024-11-19 13:10:49.926923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.827 [2024-11-19 13:10:49.926964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:46.827 [2024-11-19 13:10:49.926972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.827 [2024-11-19 13:10:49.926978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.827 [2024-11-19 13:10:49.926983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.827 [2024-11-19 13:10:49.927561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.827 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.827 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:46.827 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:46.827 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.827 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.827 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.827 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.H3r2RAXB84 00:18:46.827 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.H3r2RAXB84 00:18:46.827 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:47.126 [2024-11-19 13:10:50.244429] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.126 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:47.126 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:47.413 [2024-11-19 13:10:50.641437] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:47.413 [2024-11-19 13:10:50.641650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.413 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:47.697 malloc0 00:18:47.697 13:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:47.956 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H3r2RAXB84 00:18:47.956 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:48.215 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H3r2RAXB84 00:18:48.215 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:48.215 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:48.215 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:48.215 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.H3r2RAXB84 00:18:48.215 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:48.215 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2862165 00:18:48.215 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:48.215 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:48.215 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2862165 /var/tmp/bdevperf.sock 00:18:48.215 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2862165 ']' 00:18:48.215 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.215 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.215 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.215 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.215 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.215 [2024-11-19 13:10:51.524676] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:48.215 [2024-11-19 13:10:51.524729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862165 ] 00:18:48.474 [2024-11-19 13:10:51.600380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.474 [2024-11-19 13:10:51.641547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.041 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.041 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:49.041 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H3r2RAXB84 00:18:49.300 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:49.559 [2024-11-19 13:10:52.714209] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:49.559 TLSTESTn1 00:18:49.559 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:49.559 Running I/O for 10 seconds... 00:18:51.873 5382.00 IOPS, 21.02 MiB/s [2024-11-19T12:10:56.187Z] 5389.00 IOPS, 21.05 MiB/s [2024-11-19T12:10:57.124Z] 5408.67 IOPS, 21.13 MiB/s [2024-11-19T12:10:58.061Z] 5428.75 IOPS, 21.21 MiB/s [2024-11-19T12:10:58.998Z] 5442.40 IOPS, 21.26 MiB/s [2024-11-19T12:10:59.935Z] 5427.50 IOPS, 21.20 MiB/s [2024-11-19T12:11:01.311Z] 5449.00 IOPS, 21.29 MiB/s [2024-11-19T12:11:02.247Z] 5429.12 IOPS, 21.21 MiB/s [2024-11-19T12:11:03.184Z] 5433.67 IOPS, 21.23 MiB/s [2024-11-19T12:11:03.184Z] 5447.50 IOPS, 21.28 MiB/s 00:18:59.807 Latency(us) 00:18:59.807 [2024-11-19T12:11:03.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.807 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:59.807 Verification LBA range: start 0x0 length 0x2000 00:18:59.807 TLSTESTn1 : 10.01 5452.93 21.30 0.00 0.00 23439.09 5157.40 23478.98 00:18:59.807 [2024-11-19T12:11:03.184Z] =================================================================================================================== 00:18:59.807 [2024-11-19T12:11:03.184Z] Total : 5452.93 21.30 0.00 0.00 23439.09 5157.40 23478.98 00:18:59.807 { 00:18:59.807 "results": [ 00:18:59.807 { 00:18:59.807 "job": "TLSTESTn1", 00:18:59.807 "core_mask": "0x4", 00:18:59.807 "workload": "verify", 00:18:59.807 "status": "finished", 00:18:59.807 "verify_range": { 00:18:59.807 "start": 0, 00:18:59.807 "length": 8192 00:18:59.807 }, 00:18:59.807 "queue_depth": 128, 00:18:59.807 "io_size": 4096, 00:18:59.807 "runtime": 10.013331, 00:18:59.807 "iops": 5452.930698086381, 00:18:59.807 "mibps": 21.300510539399927, 00:18:59.807 "io_failed": 0, 00:18:59.807 "io_timeout": 0, 00:18:59.807 "avg_latency_us": 23439.09459166172, 00:18:59.807 "min_latency_us": 5157.398260869565, 00:18:59.807 "max_latency_us": 23478.98434782609 00:18:59.807 } 00:18:59.807 ], 00:18:59.807 
"core_count": 1 00:18:59.807 } 00:18:59.807 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:59.807 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2862165 00:18:59.807 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2862165 ']' 00:18:59.807 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2862165 00:18:59.807 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:59.807 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.807 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2862165 00:18:59.807 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:59.807 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:59.807 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2862165' 00:18:59.807 killing process with pid 2862165 00:18:59.807 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2862165 00:18:59.807 Received shutdown signal, test time was about 10.000000 seconds 00:18:59.807 00:18:59.807 Latency(us) 00:18:59.807 [2024-11-19T12:11:03.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.807 [2024-11-19T12:11:03.184Z] =================================================================================================================== 00:18:59.807 [2024-11-19T12:11:03.184Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:59.808 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2862165 00:18:59.808 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.H3r2RAXB84 00:18:59.808 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H3r2RAXB84 00:18:59.808 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:59.808 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H3r2RAXB84 00:18:59.808 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:00.066 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:00.066 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:00.066 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:00.066 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H3r2RAXB84 00:19:00.066 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:00.066 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:00.067 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:19:00.067 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.H3r2RAXB84 00:19:00.067 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:00.067 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2864069 00:19:00.067 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:00.067 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:00.067 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2864069 /var/tmp/bdevperf.sock 00:19:00.067 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2864069 ']' 00:19:00.067 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.067 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.067 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:00.067 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.067 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.067 [2024-11-19 13:11:03.232579] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:19:00.067 [2024-11-19 13:11:03.232628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2864069 ] 00:19:00.067 [2024-11-19 13:11:03.305763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.067 [2024-11-19 13:11:03.343228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.067 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.067 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:00.067 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H3r2RAXB84 00:19:00.325 [2024-11-19 13:11:03.612894] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.H3r2RAXB84': 0100666 00:19:00.325 [2024-11-19 13:11:03.612925] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:00.325 request: 00:19:00.325 { 00:19:00.325 "name": "key0", 00:19:00.325 "path": "/tmp/tmp.H3r2RAXB84", 00:19:00.325 "method": "keyring_file_add_key", 00:19:00.325 "req_id": 1 00:19:00.325 } 00:19:00.325 Got JSON-RPC error response 00:19:00.325 response: 00:19:00.325 { 00:19:00.325 "code": -1, 00:19:00.325 "message": "Operation not permitted" 00:19:00.325 } 00:19:00.325 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:00.584 [2024-11-19 13:11:03.817505] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.584 [2024-11-19 13:11:03.817534] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:00.584 request: 00:19:00.584 { 00:19:00.584 "name": "TLSTEST", 00:19:00.584 "trtype": "tcp", 00:19:00.584 "traddr": "10.0.0.2", 00:19:00.584 "adrfam": "ipv4", 00:19:00.584 "trsvcid": "4420", 00:19:00.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:00.584 "prchk_reftag": false, 00:19:00.584 "prchk_guard": false, 00:19:00.584 "hdgst": false, 00:19:00.584 "ddgst": false, 00:19:00.584 "psk": "key0", 00:19:00.584 "allow_unrecognized_csi": false, 00:19:00.584 "method": "bdev_nvme_attach_controller", 00:19:00.584 "req_id": 1 00:19:00.584 } 00:19:00.584 Got JSON-RPC error response 00:19:00.584 response: 00:19:00.584 { 00:19:00.584 "code": -126, 00:19:00.584 "message": "Required key not available" 00:19:00.584 } 00:19:00.584 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2864069 00:19:00.584 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2864069 ']' 00:19:00.584 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2864069 00:19:00.584 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.584 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.584 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2864069 00:19:00.584 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:00.584 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:00.584 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2864069' 00:19:00.584 killing process with pid 2864069 00:19:00.584 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2864069 00:19:00.585 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.585 00:19:00.585 Latency(us) 00:19:00.585 [2024-11-19T12:11:03.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.585 [2024-11-19T12:11:03.962Z] =================================================================================================================== 00:19:00.585 [2024-11-19T12:11:03.962Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:00.585 13:11:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2864069 00:19:00.844 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:00.844 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:00.844 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:00.844 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:00.844 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:00.844 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2861749 00:19:00.844 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2861749 ']' 00:19:00.844 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2861749 00:19:00.844 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.844 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.844 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2861749 00:19:00.844 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:00.844 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:00.844 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2861749' 00:19:00.844 killing process with pid 2861749 00:19:00.844 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2861749 00:19:00.844 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2861749 00:19:01.110 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:01.110 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:01.110 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:01.110 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.110 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2864280 00:19:01.110 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2864280 00:19:01.110 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:01.110 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2864280 ']' 00:19:01.110 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.110 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.110 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.110 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.110 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.110 [2024-11-19 13:11:04.302252] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:01.110 [2024-11-19 13:11:04.302302] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.110 [2024-11-19 13:11:04.377428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.110 [2024-11-19 13:11:04.414304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.110 [2024-11-19 13:11:04.414339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.110 [2024-11-19 13:11:04.414346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.110 [2024-11-19 13:11:04.414352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.110 [2024-11-19 13:11:04.414357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
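For orientation, the setup_nvmf_tgt helper exercised throughout this section is a short RPC sequence against the target started above: create the TCP transport, create cnode1, add a TLS-enabled listener (-k) on 10.0.0.2:4420, back it with a malloc bdev, register the PSK file with the keyring, then allow host1 with that key. A stand-alone replay of the same calls, assuming the rpc.py path from this workspace and default socket:

    # Sketch: replay the setup_nvmf_tgt RPC sequence visible in this log,
    # driving the same rpc.py client the test harness uses.
    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    NQN = "nqn.2016-06.io.spdk:cnode1"

    def rpc(*args: str) -> None:
        subprocess.run([RPC, *args], check=True)

    rpc("nvmf_create_transport", "-t", "tcp", "-o")
    rpc("nvmf_create_subsystem", NQN, "-s", "SPDK00000000000001", "-m", "10")
    rpc("nvmf_subsystem_add_listener", NQN, "-t", "tcp",
        "-a", "10.0.0.2", "-s", "4420", "-k")         # -k: TLS listener
    rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
    rpc("nvmf_subsystem_add_ns", NQN, "malloc0", "-n", "1")
    rpc("keyring_file_add_key", "key0", "/tmp/tmp.H3r2RAXB84")
    rpc("nvmf_subsystem_add_host", NQN, "nqn.2016-06.io.spdk:host1",
        "--psk", "key0")

With the key file still at mode 0666 from the chmod above, the keyring_file_add_key step is the one that fails ("Invalid permissions for key file"), which is why the NOT setup_nvmf_tgt run below stops there and nvmf_subsystem_add_host then reports that key0 does not exist.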
00:19:01.110 [2024-11-19 13:11:04.414905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.378 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.378 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:01.378 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:01.378 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.378 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.378 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.378 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.H3r2RAXB84 00:19:01.378 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:01.378 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.H3r2RAXB84 00:19:01.378 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:01.378 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.378 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:01.378 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.378 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.H3r2RAXB84 00:19:01.378 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.H3r2RAXB84 00:19:01.378 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:01.378 [2024-11-19 13:11:04.734024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.637 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:01.637 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:01.896 [2024-11-19 13:11:05.131046] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:01.896 [2024-11-19 13:11:05.131245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.896 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:02.155 malloc0 00:19:02.155 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:02.415 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H3r2RAXB84 00:19:02.415 [2024-11-19 
13:11:05.708579] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.H3r2RAXB84': 0100666 00:19:02.415 [2024-11-19 13:11:05.708605] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:02.415 request: 00:19:02.415 { 00:19:02.415 "name": "key0", 00:19:02.415 "path": "/tmp/tmp.H3r2RAXB84", 00:19:02.415 "method": "keyring_file_add_key", 00:19:02.415 "req_id": 1 00:19:02.415 } 00:19:02.415 Got JSON-RPC error response 00:19:02.415 response: 00:19:02.415 { 00:19:02.415 "code": -1, 00:19:02.415 "message": "Operation not permitted" 00:19:02.415 } 00:19:02.415 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:02.674 [2024-11-19 13:11:05.905114] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:02.674 [2024-11-19 13:11:05.905145] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:02.674 request: 00:19:02.674 { 00:19:02.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.674 "host": "nqn.2016-06.io.spdk:host1", 00:19:02.674 "psk": "key0", 00:19:02.674 "method": "nvmf_subsystem_add_host", 00:19:02.674 "req_id": 1 00:19:02.674 } 00:19:02.674 Got JSON-RPC error response 00:19:02.674 response: 00:19:02.674 { 00:19:02.674 "code": -32603, 00:19:02.674 "message": "Internal error" 00:19:02.674 } 00:19:02.674 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:02.674 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:02.674 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:02.674 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:02.674 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2864280 00:19:02.674 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2864280 ']' 00:19:02.674 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2864280 00:19:02.674 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:02.674 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.674 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2864280 00:19:02.674 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:02.674 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:02.674 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2864280' 00:19:02.674 killing process with pid 2864280 00:19:02.674 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2864280 00:19:02.674 13:11:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2864280 00:19:02.934 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.H3r2RAXB84 00:19:02.934 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:02.934 13:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:02.934 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:02.934 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.934 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2864577 00:19:02.934 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:02.934 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2864577 00:19:02.935 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2864577 ']' 00:19:02.935 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.935 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.935 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.935 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.935 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.935 [2024-11-19 13:11:06.203056] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:02.935 [2024-11-19 13:11:06.203101] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.935 [2024-11-19 13:11:06.278653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.194 [2024-11-19 13:11:06.315356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.194 [2024-11-19 13:11:06.315389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.194 [2024-11-19 13:11:06.315396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.194 [2024-11-19 13:11:06.315402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.194 [2024-11-19 13:11:06.315406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
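The key file re-registered below is the one produced earlier in this section: format_interchange_psk wrapped the configured secret in the NVMeTLSkey-1:02:...: interchange form (":02:" being the digest tag, SHA-384 here) before it was written to /tmp/tmp.H3r2RAXB84. A rough reconstruction of that encoding — base64 of the key bytes with a trailing CRC-32 — where the little-endian CRC byte order is an assumption carried over from nvme-cli's gen-tls-key rather than something this log confirms:

    # Sketch of the interchange encoding behind the NVMeTLSkey-1:02:... value
    # above. Assumptions: the test's ASCII string serves as the raw key
    # material, and the CRC-32 is appended little-endian (as nvme-cli does).
    import base64, struct, zlib

    def format_interchange_psk(key: bytes, digest_id: int) -> str:
        crc = struct.pack("<I", zlib.crc32(key))
        b64 = base64.b64encode(key + crc).decode()
        return f"NVMeTLSkey-1:{digest_id:02d}:{b64}:"

    print(format_interchange_psk(
        b"00112233445566778899aabbccddeeff0011223344556677", 2))

If those assumptions hold, this prints the same NVMeTLSkey-1:02:MDAx...: string used for the remainder of the section, where the 0600 key finally lets setup_nvmf_tgt and the TLSTESTn1 run complete.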
00:19:03.194 [2024-11-19 13:11:06.315993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.194 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.194 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:03.194 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:03.194 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:03.194 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.194 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.194 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.H3r2RAXB84 00:19:03.194 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.H3r2RAXB84 00:19:03.194 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:03.453 [2024-11-19 13:11:06.626927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.453 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:03.712 13:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:03.712 [2024-11-19 13:11:07.007922] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.712 [2024-11-19 13:11:07.008133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.712 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:03.971 malloc0 00:19:03.971 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:04.230 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H3r2RAXB84 00:19:04.230 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:04.489 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2864836 00:19:04.489 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:04.489 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:04.489 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2864836 /var/tmp/bdevperf.sock 00:19:04.489 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2864836 ']' 00:19:04.489 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.489 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.489 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.489 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.489 13:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.489 [2024-11-19 13:11:07.820052] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:04.489 [2024-11-19 13:11:07.820101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2864836 ] 00:19:04.748 [2024-11-19 13:11:07.894363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.748 [2024-11-19 13:11:07.934677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.748 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.748 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:04.748 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H3r2RAXB84 00:19:05.007 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:05.266 [2024-11-19 13:11:08.413978] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.266 TLSTESTn1 00:19:05.266 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:05.526 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:05.526 "subsystems": [ 00:19:05.526 { 00:19:05.526 "subsystem": "keyring", 00:19:05.526 "config": [ 00:19:05.526 { 00:19:05.526 "method": "keyring_file_add_key", 00:19:05.526 "params": { 00:19:05.526 "name": "key0", 00:19:05.526 "path": "/tmp/tmp.H3r2RAXB84" 00:19:05.526 } 00:19:05.526 } 00:19:05.526 ] 00:19:05.526 }, 00:19:05.526 { 00:19:05.526 "subsystem": "iobuf", 00:19:05.526 "config": [ 00:19:05.526 { 00:19:05.526 "method": "iobuf_set_options", 00:19:05.526 "params": { 00:19:05.526 "small_pool_count": 8192, 00:19:05.526 "large_pool_count": 1024, 00:19:05.526 "small_bufsize": 8192, 00:19:05.526 "large_bufsize": 135168, 00:19:05.526 "enable_numa": false 00:19:05.526 } 00:19:05.526 } 00:19:05.526 ] 00:19:05.526 }, 00:19:05.526 { 00:19:05.526 "subsystem": "sock", 00:19:05.526 "config": [ 00:19:05.526 { 00:19:05.526 "method": "sock_set_default_impl", 00:19:05.526 "params": { 00:19:05.526 "impl_name": "posix" 
00:19:05.526 } 00:19:05.526 }, 00:19:05.526 { 00:19:05.526 "method": "sock_impl_set_options", 00:19:05.526 "params": { 00:19:05.526 "impl_name": "ssl", 00:19:05.526 "recv_buf_size": 4096, 00:19:05.526 "send_buf_size": 4096, 00:19:05.526 "enable_recv_pipe": true, 00:19:05.526 "enable_quickack": false, 00:19:05.526 "enable_placement_id": 0, 00:19:05.526 "enable_zerocopy_send_server": true, 00:19:05.526 "enable_zerocopy_send_client": false, 00:19:05.526 "zerocopy_threshold": 0, 00:19:05.526 "tls_version": 0, 00:19:05.526 "enable_ktls": false 00:19:05.526 } 00:19:05.526 }, 00:19:05.526 { 00:19:05.526 "method": "sock_impl_set_options", 00:19:05.526 "params": { 00:19:05.526 "impl_name": "posix", 00:19:05.526 "recv_buf_size": 2097152, 00:19:05.526 "send_buf_size": 2097152, 00:19:05.526 "enable_recv_pipe": true, 00:19:05.526 "enable_quickack": false, 00:19:05.526 "enable_placement_id": 0, 00:19:05.526 "enable_zerocopy_send_server": true, 00:19:05.526 "enable_zerocopy_send_client": false, 00:19:05.526 "zerocopy_threshold": 0, 00:19:05.526 "tls_version": 0, 00:19:05.526 "enable_ktls": false 00:19:05.526 } 00:19:05.526 } 00:19:05.526 ] 00:19:05.526 }, 00:19:05.526 { 00:19:05.526 "subsystem": "vmd", 00:19:05.526 "config": [] 00:19:05.526 }, 00:19:05.526 { 00:19:05.526 "subsystem": "accel", 00:19:05.526 "config": [ 00:19:05.526 { 00:19:05.526 "method": "accel_set_options", 00:19:05.526 "params": { 00:19:05.526 "small_cache_size": 128, 00:19:05.526 "large_cache_size": 16, 00:19:05.526 "task_count": 2048, 00:19:05.526 "sequence_count": 2048, 00:19:05.526 "buf_count": 2048 00:19:05.526 } 00:19:05.526 } 00:19:05.526 ] 00:19:05.526 }, 00:19:05.526 { 00:19:05.526 "subsystem": "bdev", 00:19:05.526 "config": [ 00:19:05.526 { 00:19:05.526 "method": "bdev_set_options", 00:19:05.526 "params": { 00:19:05.526 "bdev_io_pool_size": 65535, 00:19:05.526 "bdev_io_cache_size": 256, 00:19:05.526 "bdev_auto_examine": true, 00:19:05.526 "iobuf_small_cache_size": 128, 00:19:05.526 "iobuf_large_cache_size": 16 00:19:05.526 } 00:19:05.526 }, 00:19:05.526 { 00:19:05.526 "method": "bdev_raid_set_options", 00:19:05.526 "params": { 00:19:05.526 "process_window_size_kb": 1024, 00:19:05.526 "process_max_bandwidth_mb_sec": 0 00:19:05.526 } 00:19:05.526 }, 00:19:05.526 { 00:19:05.526 "method": "bdev_iscsi_set_options", 00:19:05.526 "params": { 00:19:05.526 "timeout_sec": 30 00:19:05.526 } 00:19:05.526 }, 00:19:05.526 { 00:19:05.526 "method": "bdev_nvme_set_options", 00:19:05.526 "params": { 00:19:05.526 "action_on_timeout": "none", 00:19:05.526 "timeout_us": 0, 00:19:05.526 "timeout_admin_us": 0, 00:19:05.526 "keep_alive_timeout_ms": 10000, 00:19:05.526 "arbitration_burst": 0, 00:19:05.526 "low_priority_weight": 0, 00:19:05.526 "medium_priority_weight": 0, 00:19:05.526 "high_priority_weight": 0, 00:19:05.526 "nvme_adminq_poll_period_us": 10000, 00:19:05.526 "nvme_ioq_poll_period_us": 0, 00:19:05.526 "io_queue_requests": 0, 00:19:05.526 "delay_cmd_submit": true, 00:19:05.526 "transport_retry_count": 4, 00:19:05.526 "bdev_retry_count": 3, 00:19:05.526 "transport_ack_timeout": 0, 00:19:05.526 "ctrlr_loss_timeout_sec": 0, 00:19:05.526 "reconnect_delay_sec": 0, 00:19:05.526 "fast_io_fail_timeout_sec": 0, 00:19:05.526 "disable_auto_failback": false, 00:19:05.526 "generate_uuids": false, 00:19:05.526 "transport_tos": 0, 00:19:05.526 "nvme_error_stat": false, 00:19:05.526 "rdma_srq_size": 0, 00:19:05.526 "io_path_stat": false, 00:19:05.526 "allow_accel_sequence": false, 00:19:05.526 "rdma_max_cq_size": 0, 00:19:05.526 
"rdma_cm_event_timeout_ms": 0, 00:19:05.526 "dhchap_digests": [ 00:19:05.526 "sha256", 00:19:05.526 "sha384", 00:19:05.526 "sha512" 00:19:05.526 ], 00:19:05.526 "dhchap_dhgroups": [ 00:19:05.526 "null", 00:19:05.526 "ffdhe2048", 00:19:05.526 "ffdhe3072", 00:19:05.526 "ffdhe4096", 00:19:05.526 "ffdhe6144", 00:19:05.526 "ffdhe8192" 00:19:05.526 ] 00:19:05.526 } 00:19:05.526 }, 00:19:05.526 { 00:19:05.526 "method": "bdev_nvme_set_hotplug", 00:19:05.526 "params": { 00:19:05.526 "period_us": 100000, 00:19:05.526 "enable": false 00:19:05.526 } 00:19:05.526 }, 00:19:05.526 { 00:19:05.526 "method": "bdev_malloc_create", 00:19:05.526 "params": { 00:19:05.526 "name": "malloc0", 00:19:05.526 "num_blocks": 8192, 00:19:05.527 "block_size": 4096, 00:19:05.527 "physical_block_size": 4096, 00:19:05.527 "uuid": "ab6e0164-37b0-440d-b378-9063f72d87eb", 00:19:05.527 "optimal_io_boundary": 0, 00:19:05.527 "md_size": 0, 00:19:05.527 "dif_type": 0, 00:19:05.527 "dif_is_head_of_md": false, 00:19:05.527 "dif_pi_format": 0 00:19:05.527 } 00:19:05.527 }, 00:19:05.527 { 00:19:05.527 "method": "bdev_wait_for_examine" 00:19:05.527 } 00:19:05.527 ] 00:19:05.527 }, 00:19:05.527 { 00:19:05.527 "subsystem": "nbd", 00:19:05.527 "config": [] 00:19:05.527 }, 00:19:05.527 { 00:19:05.527 "subsystem": "scheduler", 00:19:05.527 "config": [ 00:19:05.527 { 00:19:05.527 "method": "framework_set_scheduler", 00:19:05.527 "params": { 00:19:05.527 "name": "static" 00:19:05.527 } 00:19:05.527 } 00:19:05.527 ] 00:19:05.527 }, 00:19:05.527 { 00:19:05.527 "subsystem": "nvmf", 00:19:05.527 "config": [ 00:19:05.527 { 00:19:05.527 "method": "nvmf_set_config", 00:19:05.527 "params": { 00:19:05.527 "discovery_filter": "match_any", 00:19:05.527 "admin_cmd_passthru": { 00:19:05.527 "identify_ctrlr": false 00:19:05.527 }, 00:19:05.527 "dhchap_digests": [ 00:19:05.527 "sha256", 00:19:05.527 "sha384", 00:19:05.527 "sha512" 00:19:05.527 ], 00:19:05.527 "dhchap_dhgroups": [ 00:19:05.527 "null", 00:19:05.527 "ffdhe2048", 00:19:05.527 "ffdhe3072", 00:19:05.527 "ffdhe4096", 00:19:05.527 "ffdhe6144", 00:19:05.527 "ffdhe8192" 00:19:05.527 ] 00:19:05.527 } 00:19:05.527 }, 00:19:05.527 { 00:19:05.527 "method": "nvmf_set_max_subsystems", 00:19:05.527 "params": { 00:19:05.527 "max_subsystems": 1024 00:19:05.527 } 00:19:05.527 }, 00:19:05.527 { 00:19:05.527 "method": "nvmf_set_crdt", 00:19:05.527 "params": { 00:19:05.527 "crdt1": 0, 00:19:05.527 "crdt2": 0, 00:19:05.527 "crdt3": 0 00:19:05.527 } 00:19:05.527 }, 00:19:05.527 { 00:19:05.527 "method": "nvmf_create_transport", 00:19:05.527 "params": { 00:19:05.527 "trtype": "TCP", 00:19:05.527 "max_queue_depth": 128, 00:19:05.527 "max_io_qpairs_per_ctrlr": 127, 00:19:05.527 "in_capsule_data_size": 4096, 00:19:05.527 "max_io_size": 131072, 00:19:05.527 "io_unit_size": 131072, 00:19:05.527 "max_aq_depth": 128, 00:19:05.527 "num_shared_buffers": 511, 00:19:05.527 "buf_cache_size": 4294967295, 00:19:05.527 "dif_insert_or_strip": false, 00:19:05.527 "zcopy": false, 00:19:05.527 "c2h_success": false, 00:19:05.527 "sock_priority": 0, 00:19:05.527 "abort_timeout_sec": 1, 00:19:05.527 "ack_timeout": 0, 00:19:05.527 "data_wr_pool_size": 0 00:19:05.527 } 00:19:05.527 }, 00:19:05.527 { 00:19:05.527 "method": "nvmf_create_subsystem", 00:19:05.527 "params": { 00:19:05.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.527 "allow_any_host": false, 00:19:05.527 "serial_number": "SPDK00000000000001", 00:19:05.527 "model_number": "SPDK bdev Controller", 00:19:05.527 "max_namespaces": 10, 00:19:05.527 "min_cntlid": 1, 00:19:05.527 
"max_cntlid": 65519, 00:19:05.527 "ana_reporting": false 00:19:05.527 } 00:19:05.527 }, 00:19:05.527 { 00:19:05.527 "method": "nvmf_subsystem_add_host", 00:19:05.527 "params": { 00:19:05.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.527 "host": "nqn.2016-06.io.spdk:host1", 00:19:05.527 "psk": "key0" 00:19:05.527 } 00:19:05.527 }, 00:19:05.527 { 00:19:05.527 "method": "nvmf_subsystem_add_ns", 00:19:05.527 "params": { 00:19:05.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.527 "namespace": { 00:19:05.527 "nsid": 1, 00:19:05.527 "bdev_name": "malloc0", 00:19:05.527 "nguid": "AB6E016437B0440DB3789063F72D87EB", 00:19:05.527 "uuid": "ab6e0164-37b0-440d-b378-9063f72d87eb", 00:19:05.527 "no_auto_visible": false 00:19:05.527 } 00:19:05.527 } 00:19:05.527 }, 00:19:05.527 { 00:19:05.527 "method": "nvmf_subsystem_add_listener", 00:19:05.527 "params": { 00:19:05.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.527 "listen_address": { 00:19:05.527 "trtype": "TCP", 00:19:05.527 "adrfam": "IPv4", 00:19:05.527 "traddr": "10.0.0.2", 00:19:05.527 "trsvcid": "4420" 00:19:05.527 }, 00:19:05.527 "secure_channel": true 00:19:05.527 } 00:19:05.527 } 00:19:05.527 ] 00:19:05.527 } 00:19:05.527 ] 00:19:05.527 }' 00:19:05.527 13:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:05.787 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:05.787 "subsystems": [ 00:19:05.787 { 00:19:05.787 "subsystem": "keyring", 00:19:05.787 "config": [ 00:19:05.787 { 00:19:05.787 "method": "keyring_file_add_key", 00:19:05.787 "params": { 00:19:05.787 "name": "key0", 00:19:05.787 "path": "/tmp/tmp.H3r2RAXB84" 00:19:05.787 } 00:19:05.787 } 00:19:05.787 ] 00:19:05.787 }, 00:19:05.787 { 00:19:05.787 "subsystem": "iobuf", 00:19:05.787 "config": [ 00:19:05.787 { 00:19:05.787 "method": "iobuf_set_options", 00:19:05.787 "params": { 00:19:05.787 "small_pool_count": 8192, 00:19:05.787 "large_pool_count": 1024, 00:19:05.787 "small_bufsize": 8192, 00:19:05.787 "large_bufsize": 135168, 00:19:05.787 "enable_numa": false 00:19:05.787 } 00:19:05.787 } 00:19:05.787 ] 00:19:05.787 }, 00:19:05.787 { 00:19:05.787 "subsystem": "sock", 00:19:05.787 "config": [ 00:19:05.787 { 00:19:05.787 "method": "sock_set_default_impl", 00:19:05.787 "params": { 00:19:05.787 "impl_name": "posix" 00:19:05.787 } 00:19:05.787 }, 00:19:05.787 { 00:19:05.787 "method": "sock_impl_set_options", 00:19:05.787 "params": { 00:19:05.787 "impl_name": "ssl", 00:19:05.787 "recv_buf_size": 4096, 00:19:05.787 "send_buf_size": 4096, 00:19:05.787 "enable_recv_pipe": true, 00:19:05.787 "enable_quickack": false, 00:19:05.787 "enable_placement_id": 0, 00:19:05.787 "enable_zerocopy_send_server": true, 00:19:05.787 "enable_zerocopy_send_client": false, 00:19:05.787 "zerocopy_threshold": 0, 00:19:05.787 "tls_version": 0, 00:19:05.787 "enable_ktls": false 00:19:05.787 } 00:19:05.787 }, 00:19:05.787 { 00:19:05.787 "method": "sock_impl_set_options", 00:19:05.787 "params": { 00:19:05.787 "impl_name": "posix", 00:19:05.787 "recv_buf_size": 2097152, 00:19:05.787 "send_buf_size": 2097152, 00:19:05.787 "enable_recv_pipe": true, 00:19:05.787 "enable_quickack": false, 00:19:05.787 "enable_placement_id": 0, 00:19:05.787 "enable_zerocopy_send_server": true, 00:19:05.787 "enable_zerocopy_send_client": false, 00:19:05.787 "zerocopy_threshold": 0, 00:19:05.787 "tls_version": 0, 00:19:05.787 "enable_ktls": false 00:19:05.787 } 00:19:05.787 
} 00:19:05.787 ] 00:19:05.787 }, 00:19:05.787 { 00:19:05.787 "subsystem": "vmd", 00:19:05.787 "config": [] 00:19:05.787 }, 00:19:05.787 { 00:19:05.787 "subsystem": "accel", 00:19:05.787 "config": [ 00:19:05.787 { 00:19:05.787 "method": "accel_set_options", 00:19:05.787 "params": { 00:19:05.787 "small_cache_size": 128, 00:19:05.787 "large_cache_size": 16, 00:19:05.787 "task_count": 2048, 00:19:05.787 "sequence_count": 2048, 00:19:05.787 "buf_count": 2048 00:19:05.787 } 00:19:05.787 } 00:19:05.787 ] 00:19:05.787 }, 00:19:05.787 { 00:19:05.787 "subsystem": "bdev", 00:19:05.787 "config": [ 00:19:05.787 { 00:19:05.787 "method": "bdev_set_options", 00:19:05.787 "params": { 00:19:05.787 "bdev_io_pool_size": 65535, 00:19:05.787 "bdev_io_cache_size": 256, 00:19:05.787 "bdev_auto_examine": true, 00:19:05.787 "iobuf_small_cache_size": 128, 00:19:05.787 "iobuf_large_cache_size": 16 00:19:05.787 } 00:19:05.787 }, 00:19:05.787 { 00:19:05.787 "method": "bdev_raid_set_options", 00:19:05.787 "params": { 00:19:05.787 "process_window_size_kb": 1024, 00:19:05.787 "process_max_bandwidth_mb_sec": 0 00:19:05.787 } 00:19:05.787 }, 00:19:05.787 { 00:19:05.787 "method": "bdev_iscsi_set_options", 00:19:05.787 "params": { 00:19:05.787 "timeout_sec": 30 00:19:05.787 } 00:19:05.787 }, 00:19:05.787 { 00:19:05.787 "method": "bdev_nvme_set_options", 00:19:05.787 "params": { 00:19:05.787 "action_on_timeout": "none", 00:19:05.787 "timeout_us": 0, 00:19:05.788 "timeout_admin_us": 0, 00:19:05.788 "keep_alive_timeout_ms": 10000, 00:19:05.788 "arbitration_burst": 0, 00:19:05.788 "low_priority_weight": 0, 00:19:05.788 "medium_priority_weight": 0, 00:19:05.788 "high_priority_weight": 0, 00:19:05.788 "nvme_adminq_poll_period_us": 10000, 00:19:05.788 "nvme_ioq_poll_period_us": 0, 00:19:05.788 "io_queue_requests": 512, 00:19:05.788 "delay_cmd_submit": true, 00:19:05.788 "transport_retry_count": 4, 00:19:05.788 "bdev_retry_count": 3, 00:19:05.788 "transport_ack_timeout": 0, 00:19:05.788 "ctrlr_loss_timeout_sec": 0, 00:19:05.788 "reconnect_delay_sec": 0, 00:19:05.788 "fast_io_fail_timeout_sec": 0, 00:19:05.788 "disable_auto_failback": false, 00:19:05.788 "generate_uuids": false, 00:19:05.788 "transport_tos": 0, 00:19:05.788 "nvme_error_stat": false, 00:19:05.788 "rdma_srq_size": 0, 00:19:05.788 "io_path_stat": false, 00:19:05.788 "allow_accel_sequence": false, 00:19:05.788 "rdma_max_cq_size": 0, 00:19:05.788 "rdma_cm_event_timeout_ms": 0, 00:19:05.788 "dhchap_digests": [ 00:19:05.788 "sha256", 00:19:05.788 "sha384", 00:19:05.788 "sha512" 00:19:05.788 ], 00:19:05.788 "dhchap_dhgroups": [ 00:19:05.788 "null", 00:19:05.788 "ffdhe2048", 00:19:05.788 "ffdhe3072", 00:19:05.788 "ffdhe4096", 00:19:05.788 "ffdhe6144", 00:19:05.788 "ffdhe8192" 00:19:05.788 ] 00:19:05.788 } 00:19:05.788 }, 00:19:05.788 { 00:19:05.788 "method": "bdev_nvme_attach_controller", 00:19:05.788 "params": { 00:19:05.788 "name": "TLSTEST", 00:19:05.788 "trtype": "TCP", 00:19:05.788 "adrfam": "IPv4", 00:19:05.788 "traddr": "10.0.0.2", 00:19:05.788 "trsvcid": "4420", 00:19:05.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.788 "prchk_reftag": false, 00:19:05.788 "prchk_guard": false, 00:19:05.788 "ctrlr_loss_timeout_sec": 0, 00:19:05.788 "reconnect_delay_sec": 0, 00:19:05.788 "fast_io_fail_timeout_sec": 0, 00:19:05.788 "psk": "key0", 00:19:05.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.788 "hdgst": false, 00:19:05.788 "ddgst": false, 00:19:05.788 "multipath": "multipath" 00:19:05.788 } 00:19:05.788 }, 00:19:05.788 { 00:19:05.788 "method": 
"bdev_nvme_set_hotplug", 00:19:05.788 "params": { 00:19:05.788 "period_us": 100000, 00:19:05.788 "enable": false 00:19:05.788 } 00:19:05.788 }, 00:19:05.788 { 00:19:05.788 "method": "bdev_wait_for_examine" 00:19:05.788 } 00:19:05.788 ] 00:19:05.788 }, 00:19:05.788 { 00:19:05.788 "subsystem": "nbd", 00:19:05.788 "config": [] 00:19:05.788 } 00:19:05.788 ] 00:19:05.788 }' 00:19:05.788 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2864836 00:19:05.788 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2864836 ']' 00:19:05.788 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2864836 00:19:05.788 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:05.788 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.788 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2864836 00:19:05.788 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:05.788 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:05.788 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2864836' 00:19:05.788 killing process with pid 2864836 00:19:05.788 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2864836 00:19:05.788 Received shutdown signal, test time was about 10.000000 seconds 00:19:05.788 00:19:05.788 Latency(us) 00:19:05.788 [2024-11-19T12:11:09.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.788 [2024-11-19T12:11:09.165Z] =================================================================================================================== 00:19:05.788 [2024-11-19T12:11:09.165Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:05.788 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2864836 00:19:06.046 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2864577 00:19:06.046 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2864577 ']' 00:19:06.046 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2864577 00:19:06.046 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:06.046 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.046 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2864577 00:19:06.046 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:06.046 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:06.046 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2864577' 00:19:06.046 killing process with pid 2864577 00:19:06.046 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2864577 00:19:06.046 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2864577 00:19:06.305 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:06.305 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:06.305 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.305 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.305 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:06.305 "subsystems": [ 00:19:06.305 { 00:19:06.305 "subsystem": "keyring", 00:19:06.305 "config": [ 00:19:06.305 { 00:19:06.305 "method": "keyring_file_add_key", 00:19:06.305 "params": { 00:19:06.305 "name": "key0", 00:19:06.305 "path": "/tmp/tmp.H3r2RAXB84" 00:19:06.305 } 00:19:06.305 } 00:19:06.305 ] 00:19:06.305 }, 00:19:06.305 { 00:19:06.305 "subsystem": "iobuf", 00:19:06.305 "config": [ 00:19:06.305 { 00:19:06.305 "method": "iobuf_set_options", 00:19:06.305 "params": { 00:19:06.305 "small_pool_count": 8192, 00:19:06.305 "large_pool_count": 1024, 00:19:06.305 "small_bufsize": 8192, 00:19:06.305 "large_bufsize": 135168, 00:19:06.305 "enable_numa": false 00:19:06.305 } 00:19:06.305 } 00:19:06.305 ] 00:19:06.305 }, 00:19:06.305 { 00:19:06.305 "subsystem": "sock", 00:19:06.305 "config": [ 00:19:06.305 { 00:19:06.305 "method": "sock_set_default_impl", 00:19:06.305 "params": { 00:19:06.305 "impl_name": "posix" 00:19:06.305 } 00:19:06.305 }, 00:19:06.305 { 00:19:06.305 "method": "sock_impl_set_options", 00:19:06.305 "params": { 00:19:06.305 "impl_name": "ssl", 00:19:06.305 "recv_buf_size": 4096, 00:19:06.305 "send_buf_size": 4096, 00:19:06.305 "enable_recv_pipe": true, 00:19:06.305 "enable_quickack": false, 00:19:06.305 "enable_placement_id": 0, 00:19:06.305 "enable_zerocopy_send_server": true, 00:19:06.305 "enable_zerocopy_send_client": false, 00:19:06.305 "zerocopy_threshold": 0, 00:19:06.305 "tls_version": 0, 00:19:06.305 "enable_ktls": false 00:19:06.305 } 00:19:06.305 }, 00:19:06.305 { 00:19:06.305 "method": "sock_impl_set_options", 00:19:06.305 "params": { 00:19:06.305 "impl_name": "posix", 00:19:06.305 "recv_buf_size": 2097152, 00:19:06.305 "send_buf_size": 2097152, 00:19:06.305 "enable_recv_pipe": true, 00:19:06.305 "enable_quickack": false, 00:19:06.305 "enable_placement_id": 0, 00:19:06.305 "enable_zerocopy_send_server": true, 00:19:06.305 "enable_zerocopy_send_client": false, 00:19:06.305 "zerocopy_threshold": 0, 00:19:06.305 "tls_version": 0, 00:19:06.305 "enable_ktls": false 00:19:06.305 } 00:19:06.305 } 00:19:06.305 ] 00:19:06.305 }, 00:19:06.305 { 00:19:06.305 "subsystem": "vmd", 00:19:06.305 "config": [] 00:19:06.305 }, 00:19:06.305 { 00:19:06.305 "subsystem": "accel", 00:19:06.305 "config": [ 00:19:06.305 { 00:19:06.305 "method": "accel_set_options", 00:19:06.305 "params": { 00:19:06.305 "small_cache_size": 128, 00:19:06.305 "large_cache_size": 16, 00:19:06.305 "task_count": 2048, 00:19:06.305 "sequence_count": 2048, 00:19:06.305 "buf_count": 2048 00:19:06.305 } 00:19:06.305 } 00:19:06.305 ] 00:19:06.305 }, 00:19:06.305 { 00:19:06.305 "subsystem": "bdev", 00:19:06.305 "config": [ 00:19:06.305 { 00:19:06.305 "method": "bdev_set_options", 00:19:06.305 "params": { 00:19:06.305 "bdev_io_pool_size": 65535, 00:19:06.305 "bdev_io_cache_size": 256, 00:19:06.305 "bdev_auto_examine": true, 00:19:06.305 "iobuf_small_cache_size": 128, 00:19:06.305 "iobuf_large_cache_size": 16 00:19:06.305 } 00:19:06.305 }, 00:19:06.305 { 00:19:06.305 "method": "bdev_raid_set_options", 00:19:06.305 "params": { 00:19:06.305 
"process_window_size_kb": 1024, 00:19:06.305 "process_max_bandwidth_mb_sec": 0 00:19:06.305 } 00:19:06.305 }, 00:19:06.305 { 00:19:06.305 "method": "bdev_iscsi_set_options", 00:19:06.305 "params": { 00:19:06.305 "timeout_sec": 30 00:19:06.305 } 00:19:06.305 }, 00:19:06.305 { 00:19:06.305 "method": "bdev_nvme_set_options", 00:19:06.305 "params": { 00:19:06.305 "action_on_timeout": "none", 00:19:06.305 "timeout_us": 0, 00:19:06.305 "timeout_admin_us": 0, 00:19:06.305 "keep_alive_timeout_ms": 10000, 00:19:06.305 "arbitration_burst": 0, 00:19:06.305 "low_priority_weight": 0, 00:19:06.305 "medium_priority_weight": 0, 00:19:06.305 "high_priority_weight": 0, 00:19:06.305 "nvme_adminq_poll_period_us": 10000, 00:19:06.305 "nvme_ioq_poll_period_us": 0, 00:19:06.305 "io_queue_requests": 0, 00:19:06.305 "delay_cmd_submit": true, 00:19:06.305 "transport_retry_count": 4, 00:19:06.305 "bdev_retry_count": 3, 00:19:06.305 "transport_ack_timeout": 0, 00:19:06.305 "ctrlr_loss_timeout_sec": 0, 00:19:06.305 "reconnect_delay_sec": 0, 00:19:06.305 "fast_io_fail_timeout_sec": 0, 00:19:06.305 "disable_auto_failback": false, 00:19:06.305 "generate_uuids": false, 00:19:06.305 "transport_tos": 0, 00:19:06.305 "nvme_error_stat": false, 00:19:06.305 "rdma_srq_size": 0, 00:19:06.305 "io_path_stat": false, 00:19:06.305 "allow_accel_sequence": false, 00:19:06.305 "rdma_max_cq_size": 0, 00:19:06.305 "rdma_cm_event_timeout_ms": 0, 00:19:06.305 "dhchap_digests": [ 00:19:06.305 "sha256", 00:19:06.305 "sha384", 00:19:06.305 "sha512" 00:19:06.305 ], 00:19:06.305 "dhchap_dhgroups": [ 00:19:06.305 "null", 00:19:06.305 "ffdhe2048", 00:19:06.305 "ffdhe3072", 00:19:06.305 "ffdhe4096", 00:19:06.305 "ffdhe6144", 00:19:06.305 "ffdhe8192" 00:19:06.305 ] 00:19:06.305 } 00:19:06.305 }, 00:19:06.305 { 00:19:06.305 "method": "bdev_nvme_set_hotplug", 00:19:06.305 "params": { 00:19:06.305 "period_us": 100000, 00:19:06.305 "enable": false 00:19:06.305 } 00:19:06.305 }, 00:19:06.305 { 00:19:06.305 "method": "bdev_malloc_create", 00:19:06.305 "params": { 00:19:06.305 "name": "malloc0", 00:19:06.305 "num_blocks": 8192, 00:19:06.305 "block_size": 4096, 00:19:06.305 "physical_block_size": 4096, 00:19:06.305 "uuid": "ab6e0164-37b0-440d-b378-9063f72d87eb", 00:19:06.305 "optimal_io_boundary": 0, 00:19:06.305 "md_size": 0, 00:19:06.305 "dif_type": 0, 00:19:06.305 "dif_is_head_of_md": false, 00:19:06.305 "dif_pi_format": 0 00:19:06.305 } 00:19:06.306 }, 00:19:06.306 { 00:19:06.306 "method": "bdev_wait_for_examine" 00:19:06.306 } 00:19:06.306 ] 00:19:06.306 }, 00:19:06.306 { 00:19:06.306 "subsystem": "nbd", 00:19:06.306 "config": [] 00:19:06.306 }, 00:19:06.306 { 00:19:06.306 "subsystem": "scheduler", 00:19:06.306 "config": [ 00:19:06.306 { 00:19:06.306 "method": "framework_set_scheduler", 00:19:06.306 "params": { 00:19:06.306 "name": "static" 00:19:06.306 } 00:19:06.306 } 00:19:06.306 ] 00:19:06.306 }, 00:19:06.306 { 00:19:06.306 "subsystem": "nvmf", 00:19:06.306 "config": [ 00:19:06.306 { 00:19:06.306 "method": "nvmf_set_config", 00:19:06.306 "params": { 00:19:06.306 "discovery_filter": "match_any", 00:19:06.306 "admin_cmd_passthru": { 00:19:06.306 "identify_ctrlr": false 00:19:06.306 }, 00:19:06.306 "dhchap_digests": [ 00:19:06.306 "sha256", 00:19:06.306 "sha384", 00:19:06.306 "sha512" 00:19:06.306 ], 00:19:06.306 "dhchap_dhgroups": [ 00:19:06.306 "null", 00:19:06.306 "ffdhe2048", 00:19:06.306 "ffdhe3072", 00:19:06.306 "ffdhe4096", 00:19:06.306 "ffdhe6144", 00:19:06.306 "ffdhe8192" 00:19:06.306 ] 00:19:06.306 } 00:19:06.306 }, 00:19:06.306 { 
00:19:06.306 "method": "nvmf_set_max_subsystems", 00:19:06.306 "params": { 00:19:06.306 "max_subsystems": 1024 00:19:06.306 } 00:19:06.306 }, 00:19:06.306 { 00:19:06.306 "method": "nvmf_set_crdt", 00:19:06.306 "params": { 00:19:06.306 "crdt1": 0, 00:19:06.306 "crdt2": 0, 00:19:06.306 "crdt3": 0 00:19:06.306 } 00:19:06.306 }, 00:19:06.306 { 00:19:06.306 "method": "nvmf_create_transport", 00:19:06.306 "params": { 00:19:06.306 "trtype": "TCP", 00:19:06.306 "max_queue_depth": 128, 00:19:06.306 "max_io_qpairs_per_ctrlr": 127, 00:19:06.306 "in_capsule_data_size": 4096, 00:19:06.306 "max_io_size": 131072, 00:19:06.306 "io_unit_size": 131072, 00:19:06.306 "max_aq_depth": 128, 00:19:06.306 "num_shared_buffers": 511, 00:19:06.306 "buf_cache_size": 4294967295, 00:19:06.306 "dif_insert_or_strip": false, 00:19:06.306 "zcopy": false, 00:19:06.306 "c2h_success": false, 00:19:06.306 "sock_priority": 0, 00:19:06.306 "abort_timeout_sec": 1, 00:19:06.306 "ack_timeout": 0, 00:19:06.306 "data_wr_pool_size": 0 00:19:06.306 } 00:19:06.306 }, 00:19:06.306 { 00:19:06.306 "method": "nvmf_create_subsystem", 00:19:06.306 "params": { 00:19:06.306 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.306 "allow_any_host": false, 00:19:06.306 "serial_number": "SPDK00000000000001", 00:19:06.306 "model_number": "SPDK bdev Controller", 00:19:06.306 "max_namespaces": 10, 00:19:06.306 "min_cntlid": 1, 00:19:06.306 "max_cntlid": 65519, 00:19:06.306 "ana_reporting": false 00:19:06.306 } 00:19:06.306 }, 00:19:06.306 { 00:19:06.306 "method": "nvmf_subsystem_add_host", 00:19:06.306 "params": { 00:19:06.306 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.306 "host": "nqn.2016-06.io.spdk:host1", 00:19:06.306 "psk": "key0" 00:19:06.306 } 00:19:06.306 }, 00:19:06.306 { 00:19:06.306 "method": "nvmf_subsystem_add_ns", 00:19:06.306 "params": { 00:19:06.306 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.306 "namespace": { 00:19:06.306 "nsid": 1, 00:19:06.306 "bdev_name": "malloc0", 00:19:06.306 "nguid": "AB6E016437B0440DB3789063F72D87EB", 00:19:06.306 "uuid": "ab6e0164-37b0-440d-b378-9063f72d87eb", 00:19:06.306 "no_auto_visible": false 00:19:06.306 } 00:19:06.306 } 00:19:06.306 }, 00:19:06.306 { 00:19:06.306 "method": "nvmf_subsystem_add_listener", 00:19:06.306 "params": { 00:19:06.306 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.306 "listen_address": { 00:19:06.306 "trtype": "TCP", 00:19:06.306 "adrfam": "IPv4", 00:19:06.306 "traddr": "10.0.0.2", 00:19:06.306 "trsvcid": "4420" 00:19:06.306 }, 00:19:06.306 "secure_channel": true 00:19:06.306 } 00:19:06.306 } 00:19:06.306 ] 00:19:06.306 } 00:19:06.306 ] 00:19:06.306 }' 00:19:06.306 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2865084 00:19:06.306 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:06.306 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2865084 00:19:06.306 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2865084 ']' 00:19:06.306 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.306 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.306 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:19:06.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.306 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.306 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.306 [2024-11-19 13:11:09.526846] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:06.306 [2024-11-19 13:11:09.526895] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.306 [2024-11-19 13:11:09.607925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.306 [2024-11-19 13:11:09.648480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.306 [2024-11-19 13:11:09.648518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.306 [2024-11-19 13:11:09.648526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.306 [2024-11-19 13:11:09.648531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.306 [2024-11-19 13:11:09.648536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.306 [2024-11-19 13:11:09.649146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.565 [2024-11-19 13:11:09.861998] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.565 [2024-11-19 13:11:09.894041] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:06.565 [2024-11-19 13:11:09.894233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.134 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.134 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:07.134 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:07.134 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:07.134 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.134 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.134 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2865332 00:19:07.134 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2865332 /var/tmp/bdevperf.sock 00:19:07.134 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2865332 ']' 00:19:07.134 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:07.134 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:07.134 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.134 
13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:07.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:07.134 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:07.134 "subsystems": [ 00:19:07.134 { 00:19:07.134 "subsystem": "keyring", 00:19:07.134 "config": [ 00:19:07.134 { 00:19:07.134 "method": "keyring_file_add_key", 00:19:07.134 "params": { 00:19:07.134 "name": "key0", 00:19:07.134 "path": "/tmp/tmp.H3r2RAXB84" 00:19:07.134 } 00:19:07.134 } 00:19:07.134 ] 00:19:07.134 }, 00:19:07.134 { 00:19:07.134 "subsystem": "iobuf", 00:19:07.134 "config": [ 00:19:07.134 { 00:19:07.134 "method": "iobuf_set_options", 00:19:07.134 "params": { 00:19:07.134 "small_pool_count": 8192, 00:19:07.134 "large_pool_count": 1024, 00:19:07.134 "small_bufsize": 8192, 00:19:07.134 "large_bufsize": 135168, 00:19:07.134 "enable_numa": false 00:19:07.134 } 00:19:07.134 } 00:19:07.134 ] 00:19:07.134 }, 00:19:07.134 { 00:19:07.134 "subsystem": "sock", 00:19:07.134 "config": [ 00:19:07.134 { 00:19:07.134 "method": "sock_set_default_impl", 00:19:07.134 "params": { 00:19:07.134 "impl_name": "posix" 00:19:07.134 } 00:19:07.134 }, 00:19:07.134 { 00:19:07.134 "method": "sock_impl_set_options", 00:19:07.134 "params": { 00:19:07.134 "impl_name": "ssl", 00:19:07.134 "recv_buf_size": 4096, 00:19:07.134 "send_buf_size": 4096, 00:19:07.134 "enable_recv_pipe": true, 00:19:07.134 "enable_quickack": false, 00:19:07.134 "enable_placement_id": 0, 00:19:07.134 "enable_zerocopy_send_server": true, 00:19:07.134 "enable_zerocopy_send_client": false, 00:19:07.134 "zerocopy_threshold": 0, 00:19:07.134 "tls_version": 0, 00:19:07.134 "enable_ktls": false 00:19:07.134 } 00:19:07.134 }, 00:19:07.134 { 00:19:07.134 "method": "sock_impl_set_options", 00:19:07.134 "params": { 00:19:07.134 "impl_name": "posix", 00:19:07.134 "recv_buf_size": 2097152, 00:19:07.134 "send_buf_size": 2097152, 00:19:07.134 "enable_recv_pipe": true, 00:19:07.134 "enable_quickack": false, 00:19:07.134 "enable_placement_id": 0, 00:19:07.134 "enable_zerocopy_send_server": true, 00:19:07.134 "enable_zerocopy_send_client": false, 00:19:07.134 "zerocopy_threshold": 0, 00:19:07.134 "tls_version": 0, 00:19:07.134 "enable_ktls": false 00:19:07.134 } 00:19:07.134 } 00:19:07.134 ] 00:19:07.134 }, 00:19:07.134 { 00:19:07.134 "subsystem": "vmd", 00:19:07.134 "config": [] 00:19:07.134 }, 00:19:07.134 { 00:19:07.134 "subsystem": "accel", 00:19:07.134 "config": [ 00:19:07.134 { 00:19:07.134 "method": "accel_set_options", 00:19:07.134 "params": { 00:19:07.134 "small_cache_size": 128, 00:19:07.134 "large_cache_size": 16, 00:19:07.134 "task_count": 2048, 00:19:07.134 "sequence_count": 2048, 00:19:07.134 "buf_count": 2048 00:19:07.134 } 00:19:07.134 } 00:19:07.134 ] 00:19:07.134 }, 00:19:07.134 { 00:19:07.134 "subsystem": "bdev", 00:19:07.134 "config": [ 00:19:07.134 { 00:19:07.134 "method": "bdev_set_options", 00:19:07.134 "params": { 00:19:07.134 "bdev_io_pool_size": 65535, 00:19:07.134 "bdev_io_cache_size": 256, 00:19:07.134 "bdev_auto_examine": true, 00:19:07.134 "iobuf_small_cache_size": 128, 00:19:07.134 "iobuf_large_cache_size": 16 00:19:07.134 } 00:19:07.134 }, 00:19:07.134 { 00:19:07.134 "method": "bdev_raid_set_options", 00:19:07.134 "params": { 00:19:07.134 "process_window_size_kb": 1024, 00:19:07.134 "process_max_bandwidth_mb_sec": 0 00:19:07.134 } 00:19:07.134 
}, 00:19:07.134 { 00:19:07.134 "method": "bdev_iscsi_set_options", 00:19:07.134 "params": { 00:19:07.134 "timeout_sec": 30 00:19:07.134 } 00:19:07.134 }, 00:19:07.134 { 00:19:07.134 "method": "bdev_nvme_set_options", 00:19:07.134 "params": { 00:19:07.134 "action_on_timeout": "none", 00:19:07.134 "timeout_us": 0, 00:19:07.134 "timeout_admin_us": 0, 00:19:07.134 "keep_alive_timeout_ms": 10000, 00:19:07.134 "arbitration_burst": 0, 00:19:07.134 "low_priority_weight": 0, 00:19:07.134 "medium_priority_weight": 0, 00:19:07.134 "high_priority_weight": 0, 00:19:07.134 "nvme_adminq_poll_period_us": 10000, 00:19:07.134 "nvme_ioq_poll_period_us": 0, 00:19:07.134 "io_queue_requests": 512, 00:19:07.134 "delay_cmd_submit": true, 00:19:07.134 "transport_retry_count": 4, 00:19:07.134 "bdev_retry_count": 3, 00:19:07.134 "transport_ack_timeout": 0, 00:19:07.134 "ctrlr_loss_timeout_sec": 0, 00:19:07.134 "reconnect_delay_sec": 0, 00:19:07.134 "fast_io_fail_timeout_sec": 0, 00:19:07.134 "disable_auto_failback": false, 00:19:07.134 "generate_uuids": false, 00:19:07.134 "transport_tos": 0, 00:19:07.134 "nvme_error_stat": false, 00:19:07.134 "rdma_srq_size": 0, 00:19:07.134 "io_path_stat": false, 00:19:07.134 "allow_accel_sequence": false, 00:19:07.134 "rdma_max_cq_size": 0, 00:19:07.134 "rdma_cm_event_timeout_ms": 0, 00:19:07.134 "dhchap_digests": [ 00:19:07.134 "sha256", 00:19:07.134 "sha384", 00:19:07.134 "sha512" 00:19:07.134 ], 00:19:07.134 "dhchap_dhgroups": [ 00:19:07.134 "null", 00:19:07.134 "ffdhe2048", 00:19:07.134 "ffdhe3072", 00:19:07.134 "ffdhe4096", 00:19:07.134 "ffdhe6144", 00:19:07.134 "ffdhe8192" 00:19:07.134 ] 00:19:07.134 } 00:19:07.134 }, 00:19:07.135 { 00:19:07.135 "method": "bdev_nvme_attach_controller", 00:19:07.135 "params": { 00:19:07.135 "name": "TLSTEST", 00:19:07.135 "trtype": "TCP", 00:19:07.135 "adrfam": "IPv4", 00:19:07.135 "traddr": "10.0.0.2", 00:19:07.135 "trsvcid": "4420", 00:19:07.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.135 "prchk_reftag": false, 00:19:07.135 "prchk_guard": false, 00:19:07.135 "ctrlr_loss_timeout_sec": 0, 00:19:07.135 "reconnect_delay_sec": 0, 00:19:07.135 "fast_io_fail_timeout_sec": 0, 00:19:07.135 "psk": "key0", 00:19:07.135 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:07.135 "hdgst": false, 00:19:07.135 "ddgst": false, 00:19:07.135 "multipath": "multipath" 00:19:07.135 } 00:19:07.135 }, 00:19:07.135 { 00:19:07.135 "method": "bdev_nvme_set_hotplug", 00:19:07.135 "params": { 00:19:07.135 "period_us": 100000, 00:19:07.135 "enable": false 00:19:07.135 } 00:19:07.135 }, 00:19:07.135 { 00:19:07.135 "method": "bdev_wait_for_examine" 00:19:07.135 } 00:19:07.135 ] 00:19:07.135 }, 00:19:07.135 { 00:19:07.135 "subsystem": "nbd", 00:19:07.135 "config": [] 00:19:07.135 } 00:19:07.135 ] 00:19:07.135 }' 00:19:07.135 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.135 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.135 [2024-11-19 13:11:10.443936] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
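[editor's note] The JSON block echoed above is the bdevperf configuration captured earlier with save_config and handed back to the fresh bdevperf instance as /dev/fd/63. A minimal sketch of that capture-and-replay pattern, assuming a bash shell (process substitution is what produces the /dev/fd path) and the workspace layout used in this run:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # capture the live JSON configuration of the running bdevperf app
    bdevperfconf=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock save_config)

    # replay it into a fresh instance; <(echo ...) appears to it as /dev/fd/NN
    "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")
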
00:19:07.135 [2024-11-19 13:11:10.443990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2865332 ] 00:19:07.394 [2024-11-19 13:11:10.517721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.394 [2024-11-19 13:11:10.560657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.394 [2024-11-19 13:11:10.712473] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:07.963 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.963 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:07.963 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:08.222 Running I/O for 10 seconds... 00:19:10.097 5154.00 IOPS, 20.13 MiB/s [2024-11-19T12:11:14.412Z] 5295.00 IOPS, 20.68 MiB/s [2024-11-19T12:11:15.790Z] 5378.67 IOPS, 21.01 MiB/s [2024-11-19T12:11:16.728Z] 5409.50 IOPS, 21.13 MiB/s [2024-11-19T12:11:17.664Z] 5430.60 IOPS, 21.21 MiB/s [2024-11-19T12:11:18.600Z] 5415.50 IOPS, 21.15 MiB/s [2024-11-19T12:11:19.536Z] 5420.43 IOPS, 21.17 MiB/s [2024-11-19T12:11:20.474Z] 5408.62 IOPS, 21.13 MiB/s [2024-11-19T12:11:21.854Z] 5410.67 IOPS, 21.14 MiB/s [2024-11-19T12:11:21.854Z] 5417.40 IOPS, 21.16 MiB/s 00:19:18.477 Latency(us) 00:19:18.477 [2024-11-19T12:11:21.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.477 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:18.477 Verification LBA range: start 0x0 length 0x2000 00:19:18.477 TLSTESTn1 : 10.04 5409.86 21.13 0.00 0.00 23605.32 5955.23 36928.11 00:19:18.477 [2024-11-19T12:11:21.854Z] =================================================================================================================== 00:19:18.477 [2024-11-19T12:11:21.854Z] Total : 5409.86 21.13 0.00 0.00 23605.32 5955.23 36928.11 00:19:18.477 { 00:19:18.477 "results": [ 00:19:18.477 { 00:19:18.477 "job": "TLSTESTn1", 00:19:18.477 "core_mask": "0x4", 00:19:18.477 "workload": "verify", 00:19:18.477 "status": "finished", 00:19:18.477 "verify_range": { 00:19:18.477 "start": 0, 00:19:18.477 "length": 8192 00:19:18.477 }, 00:19:18.477 "queue_depth": 128, 00:19:18.477 "io_size": 4096, 00:19:18.477 "runtime": 10.037599, 00:19:18.477 "iops": 5409.859469381074, 00:19:18.477 "mibps": 21.13226355226982, 00:19:18.477 "io_failed": 0, 00:19:18.477 "io_timeout": 0, 00:19:18.477 "avg_latency_us": 23605.322911831256, 00:19:18.477 "min_latency_us": 5955.227826086956, 00:19:18.477 "max_latency_us": 36928.111304347825 00:19:18.477 } 00:19:18.477 ], 00:19:18.477 "core_count": 1 00:19:18.477 } 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2865332 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2865332 ']' 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2865332 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2865332 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2865332' 00:19:18.477 killing process with pid 2865332 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2865332 00:19:18.477 Received shutdown signal, test time was about 10.000000 seconds 00:19:18.477 00:19:18.477 Latency(us) 00:19:18.477 [2024-11-19T12:11:21.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.477 [2024-11-19T12:11:21.854Z] =================================================================================================================== 00:19:18.477 [2024-11-19T12:11:21.854Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2865332 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2865084 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2865084 ']' 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2865084 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2865084 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2865084' 00:19:18.477 killing process with pid 2865084 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2865084 00:19:18.477 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2865084 00:19:18.736 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:18.736 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:18.736 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.737 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.737 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2867172 00:19:18.737 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:18.737 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2867172 
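[editor's note] The target restarted here (pid 2867172) is configured next by the same setup_nvmf_tgt helper seen at the top of this run. For reference, a sketch of the RPC sequence it issues, with commands taken from the trace itself; the key path is the temporary PSK interchange file the test created earlier:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY=/tmp/tmp.H3r2RAXB84    # PSK file generated earlier in the test

    $RPC nvmf_create_transport -t tcp -o       # TCP transport, C2H success optimization off
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k          # -k marks the listener as TLS (secure channel)
    $RPC bdev_malloc_create 32 4096 -b malloc0 # 32 MiB ram disk with 4 KiB blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 "$KEY"      # load the PSK into the target-side keyring
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0   # admit host1, bound to key0
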
00:19:18.737 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2867172 ']' 00:19:18.737 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.737 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.737 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.737 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.737 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.737 [2024-11-19 13:11:21.961309] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:18.737 [2024-11-19 13:11:21.961359] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.737 [2024-11-19 13:11:22.037667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.737 [2024-11-19 13:11:22.076542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.737 [2024-11-19 13:11:22.076579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.737 [2024-11-19 13:11:22.076587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.737 [2024-11-19 13:11:22.076594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.737 [2024-11-19 13:11:22.076598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
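[editor's note] The relaunch above enables every tracepoint group (-e 0xFFFF), and the NOTICE lines spell out how to look at them. A short sketch of both options; the build/bin location of spdk_trace and the copy destination are assumptions about this tree:

    # snapshot the live nvmf tracepoints of app instance 0, per the NOTICE above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0

    # or keep the raw shared-memory trace buffer for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.snapshot
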
00:19:18.737 [2024-11-19 13:11:22.077152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.996 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.996 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:18.996 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:18.996 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:18.996 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.996 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.996 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.H3r2RAXB84 00:19:18.996 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.H3r2RAXB84 00:19:18.996 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:19.255 [2024-11-19 13:11:22.392969] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.255 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:19.514 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:19.514 [2024-11-19 13:11:22.793993] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:19.514 [2024-11-19 13:11:22.794192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.514 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:19.773 malloc0 00:19:19.773 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:20.032 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H3r2RAXB84 00:19:20.291 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:20.291 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2867433 00:19:20.291 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.291 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2867433 /var/tmp/bdevperf.sock 00:19:20.291 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2867433 ']' 00:19:20.291 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.291 13:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:20.291 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.291 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.291 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.291 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.551 [2024-11-19 13:11:23.677426] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:20.551 [2024-11-19 13:11:23.677480] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2867433 ] 00:19:20.551 [2024-11-19 13:11:23.755795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.551 [2024-11-19 13:11:23.797237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.551 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.551 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:20.551 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H3r2RAXB84 00:19:20.811 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:21.070 [2024-11-19 13:11:24.257379] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:21.070 nvme0n1 00:19:21.070 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:21.070 Running I/O for 1 seconds... 
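[editor's note] bdevperf was started with -z, so it idles after init until told to run; attaching nvme0n1 and then calling perform_tests over its RPC socket is what kicks off the verify workload defined on its command line (-q 128 -o 4k -w verify -t 1). A sketch of that step, mirroring the invocation above (in the earlier 10-second run the helper was also given -t 20, which reads as the RPC timeout, sized above the job length):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # start the configured workload on the waiting bdevperf app and block
    # until the run finishes, printing the IOPS/latency summary when done
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
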
00:19:22.478 5383.00 IOPS, 21.03 MiB/s 00:19:22.478 Latency(us) 00:19:22.478 [2024-11-19T12:11:25.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.478 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:22.478 Verification LBA range: start 0x0 length 0x2000 00:19:22.478 nvme0n1 : 1.02 5408.97 21.13 0.00 0.00 23456.95 6553.60 20971.52 00:19:22.478 [2024-11-19T12:11:25.855Z] =================================================================================================================== 00:19:22.478 [2024-11-19T12:11:25.855Z] Total : 5408.97 21.13 0.00 0.00 23456.95 6553.60 20971.52 00:19:22.478 { 00:19:22.478 "results": [ 00:19:22.478 { 00:19:22.478 "job": "nvme0n1", 00:19:22.478 "core_mask": "0x2", 00:19:22.478 "workload": "verify", 00:19:22.478 "status": "finished", 00:19:22.478 "verify_range": { 00:19:22.478 "start": 0, 00:19:22.478 "length": 8192 00:19:22.478 }, 00:19:22.478 "queue_depth": 128, 00:19:22.478 "io_size": 4096, 00:19:22.478 "runtime": 1.018864, 00:19:22.478 "iops": 5408.965278977371, 00:19:22.478 "mibps": 21.128770621005355, 00:19:22.478 "io_failed": 0, 00:19:22.478 "io_timeout": 0, 00:19:22.478 "avg_latency_us": 23456.948227813147, 00:19:22.478 "min_latency_us": 6553.6, 00:19:22.478 "max_latency_us": 20971.52 00:19:22.478 } 00:19:22.478 ], 00:19:22.478 "core_count": 1 00:19:22.478 } 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2867433 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2867433 ']' 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2867433 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2867433 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2867433' 00:19:22.478 killing process with pid 2867433 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2867433 00:19:22.478 Received shutdown signal, test time was about 1.000000 seconds 00:19:22.478 00:19:22.478 Latency(us) 00:19:22.478 [2024-11-19T12:11:25.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.478 [2024-11-19T12:11:25.855Z] =================================================================================================================== 00:19:22.478 [2024-11-19T12:11:25.855Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2867433 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2867172 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2867172 ']' 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2867172 00:19:22.478 13:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2867172 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2867172' 00:19:22.478 killing process with pid 2867172 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2867172 00:19:22.478 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2867172 00:19:22.738 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:22.738 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:22.738 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:22.738 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.738 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2867902 00:19:22.738 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2867902 00:19:22.738 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:22.738 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2867902 ']' 00:19:22.738 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.738 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.738 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.738 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.738 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.738 [2024-11-19 13:11:25.978995] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:22.738 [2024-11-19 13:11:25.979043] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.738 [2024-11-19 13:11:26.057432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.738 [2024-11-19 13:11:26.093111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.738 [2024-11-19 13:11:26.093144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:22.738 [2024-11-19 13:11:26.093152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.738 [2024-11-19 13:11:26.093158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.738 [2024-11-19 13:11:26.093162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.738 [2024-11-19 13:11:26.093756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.997 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.997 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:22.997 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:22.997 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:22.997 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.997 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.997 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:22.998 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.998 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.998 [2024-11-19 13:11:26.237350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.998 malloc0 00:19:22.998 [2024-11-19 13:11:26.265483] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:22.998 [2024-11-19 13:11:26.265673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.998 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.998 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2867930 00:19:22.998 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2867930 /var/tmp/bdevperf.sock 00:19:22.998 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:22.998 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2867930 ']' 00:19:22.998 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.998 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.998 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.998 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.998 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.998 [2024-11-19 13:11:26.341172] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:19:22.998 [2024-11-19 13:11:26.341211] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2867930 ] 00:19:23.257 [2024-11-19 13:11:26.415582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.257 [2024-11-19 13:11:26.456494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.257 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.257 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:23.257 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H3r2RAXB84 00:19:23.516 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:23.775 [2024-11-19 13:11:26.912512] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.775 nvme0n1 00:19:23.775 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:23.775 Running I/O for 1 seconds... 00:19:25.153 5049.00 IOPS, 19.72 MiB/s 00:19:25.153 Latency(us) 00:19:25.153 [2024-11-19T12:11:28.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.153 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:25.153 Verification LBA range: start 0x0 length 0x2000 00:19:25.153 nvme0n1 : 1.02 5079.25 19.84 0.00 0.00 25005.35 5043.42 24846.69 00:19:25.153 [2024-11-19T12:11:28.530Z] =================================================================================================================== 00:19:25.153 [2024-11-19T12:11:28.530Z] Total : 5079.25 19.84 0.00 0.00 25005.35 5043.42 24846.69 00:19:25.153 { 00:19:25.153 "results": [ 00:19:25.153 { 00:19:25.153 "job": "nvme0n1", 00:19:25.153 "core_mask": "0x2", 00:19:25.153 "workload": "verify", 00:19:25.153 "status": "finished", 00:19:25.153 "verify_range": { 00:19:25.153 "start": 0, 00:19:25.153 "length": 8192 00:19:25.153 }, 00:19:25.153 "queue_depth": 128, 00:19:25.153 "io_size": 4096, 00:19:25.153 "runtime": 1.019244, 00:19:25.153 "iops": 5079.2548202393145, 00:19:25.153 "mibps": 19.840839141559822, 00:19:25.153 "io_failed": 0, 00:19:25.153 "io_timeout": 0, 00:19:25.153 "avg_latency_us": 25005.348835568693, 00:19:25.153 "min_latency_us": 5043.422608695652, 00:19:25.153 "max_latency_us": 24846.692173913045 00:19:25.153 } 00:19:25.153 ], 00:19:25.153 "core_count": 1 00:19:25.153 } 00:19:25.153 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:25.153 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.153 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.153 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.153 13:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:25.153 "subsystems": [ 00:19:25.153 { 00:19:25.153 "subsystem": "keyring", 00:19:25.153 "config": [ 00:19:25.153 { 00:19:25.153 "method": "keyring_file_add_key", 00:19:25.153 "params": { 00:19:25.153 "name": "key0", 00:19:25.153 "path": "/tmp/tmp.H3r2RAXB84" 00:19:25.153 } 00:19:25.153 } 00:19:25.153 ] 00:19:25.153 }, 00:19:25.153 { 00:19:25.153 "subsystem": "iobuf", 00:19:25.153 "config": [ 00:19:25.153 { 00:19:25.153 "method": "iobuf_set_options", 00:19:25.153 "params": { 00:19:25.153 "small_pool_count": 8192, 00:19:25.153 "large_pool_count": 1024, 00:19:25.153 "small_bufsize": 8192, 00:19:25.153 "large_bufsize": 135168, 00:19:25.153 "enable_numa": false 00:19:25.153 } 00:19:25.153 } 00:19:25.153 ] 00:19:25.153 }, 00:19:25.153 { 00:19:25.153 "subsystem": "sock", 00:19:25.153 "config": [ 00:19:25.153 { 00:19:25.153 "method": "sock_set_default_impl", 00:19:25.153 "params": { 00:19:25.153 "impl_name": "posix" 00:19:25.153 } 00:19:25.153 }, 00:19:25.153 { 00:19:25.153 "method": "sock_impl_set_options", 00:19:25.154 "params": { 00:19:25.154 "impl_name": "ssl", 00:19:25.154 "recv_buf_size": 4096, 00:19:25.154 "send_buf_size": 4096, 00:19:25.154 "enable_recv_pipe": true, 00:19:25.154 "enable_quickack": false, 00:19:25.154 "enable_placement_id": 0, 00:19:25.154 "enable_zerocopy_send_server": true, 00:19:25.154 "enable_zerocopy_send_client": false, 00:19:25.154 "zerocopy_threshold": 0, 00:19:25.154 "tls_version": 0, 00:19:25.154 "enable_ktls": false 00:19:25.154 } 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "method": "sock_impl_set_options", 00:19:25.154 "params": { 00:19:25.154 "impl_name": "posix", 00:19:25.154 "recv_buf_size": 2097152, 00:19:25.154 "send_buf_size": 2097152, 00:19:25.154 "enable_recv_pipe": true, 00:19:25.154 "enable_quickack": false, 00:19:25.154 "enable_placement_id": 0, 00:19:25.154 "enable_zerocopy_send_server": true, 00:19:25.154 "enable_zerocopy_send_client": false, 00:19:25.154 "zerocopy_threshold": 0, 00:19:25.154 "tls_version": 0, 00:19:25.154 "enable_ktls": false 00:19:25.154 } 00:19:25.154 } 00:19:25.154 ] 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "subsystem": "vmd", 00:19:25.154 "config": [] 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "subsystem": "accel", 00:19:25.154 "config": [ 00:19:25.154 { 00:19:25.154 "method": "accel_set_options", 00:19:25.154 "params": { 00:19:25.154 "small_cache_size": 128, 00:19:25.154 "large_cache_size": 16, 00:19:25.154 "task_count": 2048, 00:19:25.154 "sequence_count": 2048, 00:19:25.154 "buf_count": 2048 00:19:25.154 } 00:19:25.154 } 00:19:25.154 ] 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "subsystem": "bdev", 00:19:25.154 "config": [ 00:19:25.154 { 00:19:25.154 "method": "bdev_set_options", 00:19:25.154 "params": { 00:19:25.154 "bdev_io_pool_size": 65535, 00:19:25.154 "bdev_io_cache_size": 256, 00:19:25.154 "bdev_auto_examine": true, 00:19:25.154 "iobuf_small_cache_size": 128, 00:19:25.154 "iobuf_large_cache_size": 16 00:19:25.154 } 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "method": "bdev_raid_set_options", 00:19:25.154 "params": { 00:19:25.154 "process_window_size_kb": 1024, 00:19:25.154 "process_max_bandwidth_mb_sec": 0 00:19:25.154 } 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "method": "bdev_iscsi_set_options", 00:19:25.154 "params": { 00:19:25.154 "timeout_sec": 30 00:19:25.154 } 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "method": "bdev_nvme_set_options", 00:19:25.154 "params": { 00:19:25.154 "action_on_timeout": "none", 00:19:25.154 
"timeout_us": 0, 00:19:25.154 "timeout_admin_us": 0, 00:19:25.154 "keep_alive_timeout_ms": 10000, 00:19:25.154 "arbitration_burst": 0, 00:19:25.154 "low_priority_weight": 0, 00:19:25.154 "medium_priority_weight": 0, 00:19:25.154 "high_priority_weight": 0, 00:19:25.154 "nvme_adminq_poll_period_us": 10000, 00:19:25.154 "nvme_ioq_poll_period_us": 0, 00:19:25.154 "io_queue_requests": 0, 00:19:25.154 "delay_cmd_submit": true, 00:19:25.154 "transport_retry_count": 4, 00:19:25.154 "bdev_retry_count": 3, 00:19:25.154 "transport_ack_timeout": 0, 00:19:25.154 "ctrlr_loss_timeout_sec": 0, 00:19:25.154 "reconnect_delay_sec": 0, 00:19:25.154 "fast_io_fail_timeout_sec": 0, 00:19:25.154 "disable_auto_failback": false, 00:19:25.154 "generate_uuids": false, 00:19:25.154 "transport_tos": 0, 00:19:25.154 "nvme_error_stat": false, 00:19:25.154 "rdma_srq_size": 0, 00:19:25.154 "io_path_stat": false, 00:19:25.154 "allow_accel_sequence": false, 00:19:25.154 "rdma_max_cq_size": 0, 00:19:25.154 "rdma_cm_event_timeout_ms": 0, 00:19:25.154 "dhchap_digests": [ 00:19:25.154 "sha256", 00:19:25.154 "sha384", 00:19:25.154 "sha512" 00:19:25.154 ], 00:19:25.154 "dhchap_dhgroups": [ 00:19:25.154 "null", 00:19:25.154 "ffdhe2048", 00:19:25.154 "ffdhe3072", 00:19:25.154 "ffdhe4096", 00:19:25.154 "ffdhe6144", 00:19:25.154 "ffdhe8192" 00:19:25.154 ] 00:19:25.154 } 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "method": "bdev_nvme_set_hotplug", 00:19:25.154 "params": { 00:19:25.154 "period_us": 100000, 00:19:25.154 "enable": false 00:19:25.154 } 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "method": "bdev_malloc_create", 00:19:25.154 "params": { 00:19:25.154 "name": "malloc0", 00:19:25.154 "num_blocks": 8192, 00:19:25.154 "block_size": 4096, 00:19:25.154 "physical_block_size": 4096, 00:19:25.154 "uuid": "69408132-950d-4bc9-bbf6-1f49b36b7d8f", 00:19:25.154 "optimal_io_boundary": 0, 00:19:25.154 "md_size": 0, 00:19:25.154 "dif_type": 0, 00:19:25.154 "dif_is_head_of_md": false, 00:19:25.154 "dif_pi_format": 0 00:19:25.154 } 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "method": "bdev_wait_for_examine" 00:19:25.154 } 00:19:25.154 ] 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "subsystem": "nbd", 00:19:25.154 "config": [] 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "subsystem": "scheduler", 00:19:25.154 "config": [ 00:19:25.154 { 00:19:25.154 "method": "framework_set_scheduler", 00:19:25.154 "params": { 00:19:25.154 "name": "static" 00:19:25.154 } 00:19:25.154 } 00:19:25.154 ] 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "subsystem": "nvmf", 00:19:25.154 "config": [ 00:19:25.154 { 00:19:25.154 "method": "nvmf_set_config", 00:19:25.154 "params": { 00:19:25.154 "discovery_filter": "match_any", 00:19:25.154 "admin_cmd_passthru": { 00:19:25.154 "identify_ctrlr": false 00:19:25.154 }, 00:19:25.154 "dhchap_digests": [ 00:19:25.154 "sha256", 00:19:25.154 "sha384", 00:19:25.154 "sha512" 00:19:25.154 ], 00:19:25.154 "dhchap_dhgroups": [ 00:19:25.154 "null", 00:19:25.154 "ffdhe2048", 00:19:25.154 "ffdhe3072", 00:19:25.154 "ffdhe4096", 00:19:25.154 "ffdhe6144", 00:19:25.154 "ffdhe8192" 00:19:25.154 ] 00:19:25.154 } 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "method": "nvmf_set_max_subsystems", 00:19:25.154 "params": { 00:19:25.154 "max_subsystems": 1024 00:19:25.154 } 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "method": "nvmf_set_crdt", 00:19:25.154 "params": { 00:19:25.154 "crdt1": 0, 00:19:25.154 "crdt2": 0, 00:19:25.154 "crdt3": 0 00:19:25.154 } 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "method": "nvmf_create_transport", 00:19:25.154 "params": 
{ 00:19:25.154 "trtype": "TCP", 00:19:25.154 "max_queue_depth": 128, 00:19:25.154 "max_io_qpairs_per_ctrlr": 127, 00:19:25.154 "in_capsule_data_size": 4096, 00:19:25.154 "max_io_size": 131072, 00:19:25.154 "io_unit_size": 131072, 00:19:25.154 "max_aq_depth": 128, 00:19:25.154 "num_shared_buffers": 511, 00:19:25.154 "buf_cache_size": 4294967295, 00:19:25.154 "dif_insert_or_strip": false, 00:19:25.154 "zcopy": false, 00:19:25.154 "c2h_success": false, 00:19:25.154 "sock_priority": 0, 00:19:25.154 "abort_timeout_sec": 1, 00:19:25.154 "ack_timeout": 0, 00:19:25.154 "data_wr_pool_size": 0 00:19:25.154 } 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "method": "nvmf_create_subsystem", 00:19:25.154 "params": { 00:19:25.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.154 "allow_any_host": false, 00:19:25.154 "serial_number": "00000000000000000000", 00:19:25.154 "model_number": "SPDK bdev Controller", 00:19:25.154 "max_namespaces": 32, 00:19:25.154 "min_cntlid": 1, 00:19:25.154 "max_cntlid": 65519, 00:19:25.154 "ana_reporting": false 00:19:25.154 } 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "method": "nvmf_subsystem_add_host", 00:19:25.154 "params": { 00:19:25.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.154 "host": "nqn.2016-06.io.spdk:host1", 00:19:25.154 "psk": "key0" 00:19:25.154 } 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "method": "nvmf_subsystem_add_ns", 00:19:25.154 "params": { 00:19:25.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.154 "namespace": { 00:19:25.154 "nsid": 1, 00:19:25.154 "bdev_name": "malloc0", 00:19:25.154 "nguid": "69408132950D4BC9BBF61F49B36B7D8F", 00:19:25.154 "uuid": "69408132-950d-4bc9-bbf6-1f49b36b7d8f", 00:19:25.154 "no_auto_visible": false 00:19:25.154 } 00:19:25.154 } 00:19:25.154 }, 00:19:25.154 { 00:19:25.154 "method": "nvmf_subsystem_add_listener", 00:19:25.154 "params": { 00:19:25.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.154 "listen_address": { 00:19:25.154 "trtype": "TCP", 00:19:25.154 "adrfam": "IPv4", 00:19:25.154 "traddr": "10.0.0.2", 00:19:25.154 "trsvcid": "4420" 00:19:25.154 }, 00:19:25.154 "secure_channel": false, 00:19:25.154 "sock_impl": "ssl" 00:19:25.154 } 00:19:25.154 } 00:19:25.154 ] 00:19:25.154 } 00:19:25.154 ] 00:19:25.154 }' 00:19:25.154 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:25.154 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:25.154 "subsystems": [ 00:19:25.154 { 00:19:25.155 "subsystem": "keyring", 00:19:25.155 "config": [ 00:19:25.155 { 00:19:25.155 "method": "keyring_file_add_key", 00:19:25.155 "params": { 00:19:25.155 "name": "key0", 00:19:25.155 "path": "/tmp/tmp.H3r2RAXB84" 00:19:25.155 } 00:19:25.155 } 00:19:25.155 ] 00:19:25.155 }, 00:19:25.155 { 00:19:25.155 "subsystem": "iobuf", 00:19:25.155 "config": [ 00:19:25.155 { 00:19:25.155 "method": "iobuf_set_options", 00:19:25.155 "params": { 00:19:25.155 "small_pool_count": 8192, 00:19:25.155 "large_pool_count": 1024, 00:19:25.155 "small_bufsize": 8192, 00:19:25.155 "large_bufsize": 135168, 00:19:25.155 "enable_numa": false 00:19:25.155 } 00:19:25.155 } 00:19:25.155 ] 00:19:25.155 }, 00:19:25.155 { 00:19:25.155 "subsystem": "sock", 00:19:25.155 "config": [ 00:19:25.155 { 00:19:25.155 "method": "sock_set_default_impl", 00:19:25.155 "params": { 00:19:25.155 "impl_name": "posix" 00:19:25.155 } 00:19:25.155 }, 00:19:25.155 { 00:19:25.155 "method": "sock_impl_set_options", 00:19:25.155 
"params": { 00:19:25.155 "impl_name": "ssl", 00:19:25.155 "recv_buf_size": 4096, 00:19:25.155 "send_buf_size": 4096, 00:19:25.155 "enable_recv_pipe": true, 00:19:25.155 "enable_quickack": false, 00:19:25.155 "enable_placement_id": 0, 00:19:25.155 "enable_zerocopy_send_server": true, 00:19:25.155 "enable_zerocopy_send_client": false, 00:19:25.155 "zerocopy_threshold": 0, 00:19:25.155 "tls_version": 0, 00:19:25.155 "enable_ktls": false 00:19:25.155 } 00:19:25.155 }, 00:19:25.155 { 00:19:25.155 "method": "sock_impl_set_options", 00:19:25.155 "params": { 00:19:25.155 "impl_name": "posix", 00:19:25.155 "recv_buf_size": 2097152, 00:19:25.155 "send_buf_size": 2097152, 00:19:25.155 "enable_recv_pipe": true, 00:19:25.155 "enable_quickack": false, 00:19:25.155 "enable_placement_id": 0, 00:19:25.155 "enable_zerocopy_send_server": true, 00:19:25.155 "enable_zerocopy_send_client": false, 00:19:25.155 "zerocopy_threshold": 0, 00:19:25.155 "tls_version": 0, 00:19:25.155 "enable_ktls": false 00:19:25.155 } 00:19:25.155 } 00:19:25.155 ] 00:19:25.155 }, 00:19:25.155 { 00:19:25.155 "subsystem": "vmd", 00:19:25.155 "config": [] 00:19:25.155 }, 00:19:25.155 { 00:19:25.155 "subsystem": "accel", 00:19:25.155 "config": [ 00:19:25.155 { 00:19:25.155 "method": "accel_set_options", 00:19:25.155 "params": { 00:19:25.155 "small_cache_size": 128, 00:19:25.155 "large_cache_size": 16, 00:19:25.155 "task_count": 2048, 00:19:25.155 "sequence_count": 2048, 00:19:25.155 "buf_count": 2048 00:19:25.155 } 00:19:25.155 } 00:19:25.155 ] 00:19:25.155 }, 00:19:25.155 { 00:19:25.155 "subsystem": "bdev", 00:19:25.155 "config": [ 00:19:25.155 { 00:19:25.155 "method": "bdev_set_options", 00:19:25.155 "params": { 00:19:25.155 "bdev_io_pool_size": 65535, 00:19:25.155 "bdev_io_cache_size": 256, 00:19:25.155 "bdev_auto_examine": true, 00:19:25.155 "iobuf_small_cache_size": 128, 00:19:25.155 "iobuf_large_cache_size": 16 00:19:25.155 } 00:19:25.155 }, 00:19:25.155 { 00:19:25.155 "method": "bdev_raid_set_options", 00:19:25.155 "params": { 00:19:25.155 "process_window_size_kb": 1024, 00:19:25.155 "process_max_bandwidth_mb_sec": 0 00:19:25.155 } 00:19:25.155 }, 00:19:25.155 { 00:19:25.155 "method": "bdev_iscsi_set_options", 00:19:25.155 "params": { 00:19:25.155 "timeout_sec": 30 00:19:25.155 } 00:19:25.155 }, 00:19:25.155 { 00:19:25.155 "method": "bdev_nvme_set_options", 00:19:25.155 "params": { 00:19:25.155 "action_on_timeout": "none", 00:19:25.155 "timeout_us": 0, 00:19:25.155 "timeout_admin_us": 0, 00:19:25.155 "keep_alive_timeout_ms": 10000, 00:19:25.155 "arbitration_burst": 0, 00:19:25.155 "low_priority_weight": 0, 00:19:25.155 "medium_priority_weight": 0, 00:19:25.155 "high_priority_weight": 0, 00:19:25.155 "nvme_adminq_poll_period_us": 10000, 00:19:25.155 "nvme_ioq_poll_period_us": 0, 00:19:25.155 "io_queue_requests": 512, 00:19:25.155 "delay_cmd_submit": true, 00:19:25.155 "transport_retry_count": 4, 00:19:25.155 "bdev_retry_count": 3, 00:19:25.155 "transport_ack_timeout": 0, 00:19:25.155 "ctrlr_loss_timeout_sec": 0, 00:19:25.155 "reconnect_delay_sec": 0, 00:19:25.155 "fast_io_fail_timeout_sec": 0, 00:19:25.155 "disable_auto_failback": false, 00:19:25.155 "generate_uuids": false, 00:19:25.155 "transport_tos": 0, 00:19:25.155 "nvme_error_stat": false, 00:19:25.155 "rdma_srq_size": 0, 00:19:25.155 "io_path_stat": false, 00:19:25.155 "allow_accel_sequence": false, 00:19:25.155 "rdma_max_cq_size": 0, 00:19:25.155 "rdma_cm_event_timeout_ms": 0, 00:19:25.155 "dhchap_digests": [ 00:19:25.155 "sha256", 00:19:25.155 "sha384", 00:19:25.155 
"sha512" 00:19:25.155 ], 00:19:25.155 "dhchap_dhgroups": [ 00:19:25.155 "null", 00:19:25.155 "ffdhe2048", 00:19:25.155 "ffdhe3072", 00:19:25.155 "ffdhe4096", 00:19:25.155 "ffdhe6144", 00:19:25.155 "ffdhe8192" 00:19:25.155 ] 00:19:25.155 } 00:19:25.155 }, 00:19:25.155 { 00:19:25.155 "method": "bdev_nvme_attach_controller", 00:19:25.155 "params": { 00:19:25.155 "name": "nvme0", 00:19:25.155 "trtype": "TCP", 00:19:25.155 "adrfam": "IPv4", 00:19:25.155 "traddr": "10.0.0.2", 00:19:25.155 "trsvcid": "4420", 00:19:25.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.155 "prchk_reftag": false, 00:19:25.155 "prchk_guard": false, 00:19:25.155 "ctrlr_loss_timeout_sec": 0, 00:19:25.155 "reconnect_delay_sec": 0, 00:19:25.155 "fast_io_fail_timeout_sec": 0, 00:19:25.155 "psk": "key0", 00:19:25.155 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.155 "hdgst": false, 00:19:25.155 "ddgst": false, 00:19:25.155 "multipath": "multipath" 00:19:25.155 } 00:19:25.155 }, 00:19:25.155 { 00:19:25.155 "method": "bdev_nvme_set_hotplug", 00:19:25.155 "params": { 00:19:25.155 "period_us": 100000, 00:19:25.155 "enable": false 00:19:25.155 } 00:19:25.155 }, 00:19:25.155 { 00:19:25.155 "method": "bdev_enable_histogram", 00:19:25.155 "params": { 00:19:25.155 "name": "nvme0n1", 00:19:25.155 "enable": true 00:19:25.155 } 00:19:25.155 }, 00:19:25.155 { 00:19:25.155 "method": "bdev_wait_for_examine" 00:19:25.155 } 00:19:25.155 ] 00:19:25.155 }, 00:19:25.155 { 00:19:25.155 "subsystem": "nbd", 00:19:25.155 "config": [] 00:19:25.155 } 00:19:25.155 ] 00:19:25.155 }' 00:19:25.155 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2867930 00:19:25.155 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2867930 ']' 00:19:25.155 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2867930 00:19:25.155 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:25.155 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.155 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2867930 00:19:25.415 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:25.415 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:25.415 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2867930' 00:19:25.415 killing process with pid 2867930 00:19:25.415 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2867930 00:19:25.415 Received shutdown signal, test time was about 1.000000 seconds 00:19:25.415 00:19:25.415 Latency(us) 00:19:25.415 [2024-11-19T12:11:28.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.415 [2024-11-19T12:11:28.792Z] =================================================================================================================== 00:19:25.415 [2024-11-19T12:11:28.792Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:25.415 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2867930 00:19:25.415 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2867902 00:19:25.415 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2867902 
']' 00:19:25.415 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2867902 00:19:25.415 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:25.415 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.415 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2867902 00:19:25.415 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.415 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.415 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2867902' 00:19:25.415 killing process with pid 2867902 00:19:25.415 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2867902 00:19:25.415 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2867902 00:19:25.675 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:25.675 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:25.675 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:25.675 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:25.675 "subsystems": [ 00:19:25.675 { 00:19:25.675 "subsystem": "keyring", 00:19:25.675 "config": [ 00:19:25.675 { 00:19:25.675 "method": "keyring_file_add_key", 00:19:25.675 "params": { 00:19:25.675 "name": "key0", 00:19:25.675 "path": "/tmp/tmp.H3r2RAXB84" 00:19:25.675 } 00:19:25.675 } 00:19:25.675 ] 00:19:25.675 }, 00:19:25.675 { 00:19:25.675 "subsystem": "iobuf", 00:19:25.675 "config": [ 00:19:25.675 { 00:19:25.675 "method": "iobuf_set_options", 00:19:25.675 "params": { 00:19:25.675 "small_pool_count": 8192, 00:19:25.675 "large_pool_count": 1024, 00:19:25.675 "small_bufsize": 8192, 00:19:25.675 "large_bufsize": 135168, 00:19:25.675 "enable_numa": false 00:19:25.675 } 00:19:25.675 } 00:19:25.675 ] 00:19:25.675 }, 00:19:25.675 { 00:19:25.675 "subsystem": "sock", 00:19:25.675 "config": [ 00:19:25.675 { 00:19:25.675 "method": "sock_set_default_impl", 00:19:25.675 "params": { 00:19:25.675 "impl_name": "posix" 00:19:25.675 } 00:19:25.675 }, 00:19:25.675 { 00:19:25.675 "method": "sock_impl_set_options", 00:19:25.675 "params": { 00:19:25.675 "impl_name": "ssl", 00:19:25.675 "recv_buf_size": 4096, 00:19:25.675 "send_buf_size": 4096, 00:19:25.675 "enable_recv_pipe": true, 00:19:25.675 "enable_quickack": false, 00:19:25.675 "enable_placement_id": 0, 00:19:25.675 "enable_zerocopy_send_server": true, 00:19:25.675 "enable_zerocopy_send_client": false, 00:19:25.675 "zerocopy_threshold": 0, 00:19:25.675 "tls_version": 0, 00:19:25.675 "enable_ktls": false 00:19:25.675 } 00:19:25.675 }, 00:19:25.675 { 00:19:25.675 "method": "sock_impl_set_options", 00:19:25.675 "params": { 00:19:25.675 "impl_name": "posix", 00:19:25.675 "recv_buf_size": 2097152, 00:19:25.675 "send_buf_size": 2097152, 00:19:25.675 "enable_recv_pipe": true, 00:19:25.675 "enable_quickack": false, 00:19:25.675 "enable_placement_id": 0, 00:19:25.675 "enable_zerocopy_send_server": true, 00:19:25.675 "enable_zerocopy_send_client": false, 00:19:25.675 "zerocopy_threshold": 0, 00:19:25.675 "tls_version": 0, 00:19:25.675 "enable_ktls": 
false 00:19:25.675 } 00:19:25.675 } 00:19:25.675 ] 00:19:25.675 }, 00:19:25.675 { 00:19:25.675 "subsystem": "vmd", 00:19:25.675 "config": [] 00:19:25.675 }, 00:19:25.675 { 00:19:25.675 "subsystem": "accel", 00:19:25.675 "config": [ 00:19:25.675 { 00:19:25.675 "method": "accel_set_options", 00:19:25.675 "params": { 00:19:25.675 "small_cache_size": 128, 00:19:25.675 "large_cache_size": 16, 00:19:25.675 "task_count": 2048, 00:19:25.675 "sequence_count": 2048, 00:19:25.675 "buf_count": 2048 00:19:25.675 } 00:19:25.675 } 00:19:25.675 ] 00:19:25.675 }, 00:19:25.675 { 00:19:25.675 "subsystem": "bdev", 00:19:25.675 "config": [ 00:19:25.675 { 00:19:25.675 "method": "bdev_set_options", 00:19:25.675 "params": { 00:19:25.675 "bdev_io_pool_size": 65535, 00:19:25.675 "bdev_io_cache_size": 256, 00:19:25.675 "bdev_auto_examine": true, 00:19:25.675 "iobuf_small_cache_size": 128, 00:19:25.675 "iobuf_large_cache_size": 16 00:19:25.675 } 00:19:25.675 }, 00:19:25.675 { 00:19:25.675 "method": "bdev_raid_set_options", 00:19:25.675 "params": { 00:19:25.675 "process_window_size_kb": 1024, 00:19:25.675 "process_max_bandwidth_mb_sec": 0 00:19:25.675 } 00:19:25.675 }, 00:19:25.675 { 00:19:25.675 "method": "bdev_iscsi_set_options", 00:19:25.675 "params": { 00:19:25.675 "timeout_sec": 30 00:19:25.675 } 00:19:25.675 }, 00:19:25.675 { 00:19:25.675 "method": "bdev_nvme_set_options", 00:19:25.675 "params": { 00:19:25.675 "action_on_timeout": "none", 00:19:25.675 "timeout_us": 0, 00:19:25.675 "timeout_admin_us": 0, 00:19:25.675 "keep_alive_timeout_ms": 10000, 00:19:25.675 "arbitration_burst": 0, 00:19:25.675 "low_priority_weight": 0, 00:19:25.675 "medium_priority_weight": 0, 00:19:25.675 "high_priority_weight": 0, 00:19:25.675 "nvme_adminq_poll_period_us": 10000, 00:19:25.675 "nvme_ioq_poll_period_us": 0, 00:19:25.675 "io_queue_requests": 0, 00:19:25.675 "delay_cmd_submit": true, 00:19:25.675 "transport_retry_count": 4, 00:19:25.675 "bdev_retry_count": 3, 00:19:25.675 "transport_ack_timeout": 0, 00:19:25.675 "ctrlr_loss_timeout_sec": 0, 00:19:25.676 "reconnect_delay_sec": 0, 00:19:25.676 "fast_io_fail_timeout_sec": 0, 00:19:25.676 "disable_auto_failback": false, 00:19:25.676 "generate_uuids": false, 00:19:25.676 "transport_tos": 0, 00:19:25.676 "nvme_error_stat": false, 00:19:25.676 "rdma_srq_size": 0, 00:19:25.676 "io_path_stat": false, 00:19:25.676 "allow_accel_sequence": false, 00:19:25.676 "rdma_max_cq_size": 0, 00:19:25.676 "rdma_cm_event_timeout_ms": 0, 00:19:25.676 "dhchap_digests": [ 00:19:25.676 "sha256", 00:19:25.676 "sha384", 00:19:25.676 "sha512" 00:19:25.676 ], 00:19:25.676 "dhchap_dhgroups": [ 00:19:25.676 "null", 00:19:25.676 "ffdhe2048", 00:19:25.676 "ffdhe3072", 00:19:25.676 "ffdhe4096", 00:19:25.676 "ffdhe6144", 00:19:25.676 "ffdhe8192" 00:19:25.676 ] 00:19:25.676 } 00:19:25.676 }, 00:19:25.676 { 00:19:25.676 "method": "bdev_nvme_set_hotplug", 00:19:25.676 "params": { 00:19:25.676 "period_us": 100000, 00:19:25.676 "enable": false 00:19:25.676 } 00:19:25.676 }, 00:19:25.676 { 00:19:25.676 "method": "bdev_malloc_create", 00:19:25.676 "params": { 00:19:25.676 "name": "malloc0", 00:19:25.676 "num_blocks": 8192, 00:19:25.676 "block_size": 4096, 00:19:25.676 "physical_block_size": 4096, 00:19:25.676 "uuid": "69408132-950d-4bc9-bbf6-1f49b36b7d8f", 00:19:25.676 "optimal_io_boundary": 0, 00:19:25.676 "md_size": 0, 00:19:25.676 "dif_type": 0, 00:19:25.676 "dif_is_head_of_md": false, 00:19:25.676 "dif_pi_format": 0 00:19:25.676 } 00:19:25.676 }, 00:19:25.676 { 00:19:25.676 "method": "bdev_wait_for_examine" 
00:19:25.676 } 00:19:25.676 ] 00:19:25.676 }, 00:19:25.676 { 00:19:25.676 "subsystem": "nbd", 00:19:25.676 "config": [] 00:19:25.676 }, 00:19:25.676 { 00:19:25.676 "subsystem": "scheduler", 00:19:25.676 "config": [ 00:19:25.676 { 00:19:25.676 "method": "framework_set_scheduler", 00:19:25.676 "params": { 00:19:25.676 "name": "static" 00:19:25.676 } 00:19:25.676 } 00:19:25.676 ] 00:19:25.676 }, 00:19:25.676 { 00:19:25.676 "subsystem": "nvmf", 00:19:25.676 "config": [ 00:19:25.676 { 00:19:25.676 "method": "nvmf_set_config", 00:19:25.676 "params": { 00:19:25.676 "discovery_filter": "match_any", 00:19:25.676 "admin_cmd_passthru": { 00:19:25.676 "identify_ctrlr": false 00:19:25.676 }, 00:19:25.676 "dhchap_digests": [ 00:19:25.676 "sha256", 00:19:25.676 "sha384", 00:19:25.676 "sha512" 00:19:25.676 ], 00:19:25.676 "dhchap_dhgroups": [ 00:19:25.676 "null", 00:19:25.676 "ffdhe2048", 00:19:25.676 "ffdhe3072", 00:19:25.676 "ffdhe4096", 00:19:25.676 "ffdhe6144", 00:19:25.676 "ffdhe8192" 00:19:25.676 ] 00:19:25.676 } 00:19:25.676 }, 00:19:25.676 { 00:19:25.676 "method": "nvmf_set_max_subsystems", 00:19:25.676 "params": { 00:19:25.676 "max_subsystems": 1024 00:19:25.676 } 00:19:25.676 }, 00:19:25.676 { 00:19:25.676 "method": "nvmf_set_crdt", 00:19:25.676 "params": { 00:19:25.676 "crdt1": 0, 00:19:25.676 "crdt2": 0, 00:19:25.676 "crdt3": 0 00:19:25.676 } 00:19:25.676 }, 00:19:25.676 { 00:19:25.676 "method": "nvmf_create_transport", 00:19:25.676 "params": { 00:19:25.676 "trtype": "TCP", 00:19:25.676 "max_queue_depth": 128, 00:19:25.676 "max_io_qpairs_per_ctrlr": 127, 00:19:25.676 "in_capsule_data_size": 4096, 00:19:25.676 "max_io_size": 131072, 00:19:25.676 "io_unit_size": 131072, 00:19:25.676 "max_aq_depth": 128, 00:19:25.676 "num_shared_buffers": 511, 00:19:25.676 "buf_cache_size": 4294967295, 00:19:25.676 "dif_insert_or_strip": false, 00:19:25.676 "zcopy": false, 00:19:25.676 "c2h_success": false, 00:19:25.676 "sock_priority": 0, 00:19:25.676 "abort_timeout_sec": 1, 00:19:25.676 "ack_timeout": 0, 00:19:25.676 "data_wr_pool_size": 0 00:19:25.676 } 00:19:25.676 }, 00:19:25.676 { 00:19:25.676 "method": "nvmf_create_subsystem", 00:19:25.676 "params": { 00:19:25.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.676 "allow_any_host": false, 00:19:25.676 "serial_number": "00000000000000000000", 00:19:25.676 "model_number": "SPDK bdev Controller", 00:19:25.676 "max_namespaces": 32, 00:19:25.676 "min_cntlid": 1, 00:19:25.676 "max_cntlid": 65519, 00:19:25.676 "ana_reporting": false 00:19:25.676 } 00:19:25.676 }, 00:19:25.676 { 00:19:25.676 "method": "nvmf_subsystem_add_host", 00:19:25.676 "params": { 00:19:25.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.676 "host": "nqn.2016-06.io.spdk:host1", 00:19:25.676 "psk": "key0" 00:19:25.676 } 00:19:25.676 }, 00:19:25.676 { 00:19:25.676 "method": "nvmf_subsystem_add_ns", 00:19:25.676 "params": { 00:19:25.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.676 "namespace": { 00:19:25.676 "nsid": 1, 00:19:25.676 "bdev_name": "malloc0", 00:19:25.676 "nguid": "69408132950D4BC9BBF61F49B36B7D8F", 00:19:25.676 "uuid": "69408132-950d-4bc9-bbf6-1f49b36b7d8f", 00:19:25.676 "no_auto_visible": false 00:19:25.676 } 00:19:25.676 } 00:19:25.676 }, 00:19:25.676 { 00:19:25.676 "method": "nvmf_subsystem_add_listener", 00:19:25.676 "params": { 00:19:25.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.676 "listen_address": { 00:19:25.676 "trtype": "TCP", 00:19:25.676 "adrfam": "IPv4", 00:19:25.676 "traddr": "10.0.0.2", 00:19:25.676 "trsvcid": "4420" 00:19:25.676 }, 00:19:25.676 
"secure_channel": false, 00:19:25.676 "sock_impl": "ssl" 00:19:25.676 } 00:19:25.676 } 00:19:25.676 ] 00:19:25.676 } 00:19:25.676 ] 00:19:25.676 }' 00:19:25.676 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.676 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2868401 00:19:25.676 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:25.676 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2868401 00:19:25.676 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2868401 ']' 00:19:25.676 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.676 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.676 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.676 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.676 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.676 [2024-11-19 13:11:28.995008] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:25.676 [2024-11-19 13:11:28.995055] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.935 [2024-11-19 13:11:29.074226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.935 [2024-11-19 13:11:29.115047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.935 [2024-11-19 13:11:29.115086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.935 [2024-11-19 13:11:29.115093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.935 [2024-11-19 13:11:29.115100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.935 [2024-11-19 13:11:29.115105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:25.935 [2024-11-19 13:11:29.115728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.193 [2024-11-19 13:11:29.329139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.193 [2024-11-19 13:11:29.361171] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:26.193 [2024-11-19 13:11:29.361375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.761 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.761 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:26.761 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:26.761 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:26.761 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.761 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.761 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2868643 00:19:26.761 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2868643 /var/tmp/bdevperf.sock 00:19:26.761 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2868643 ']' 00:19:26.761 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.761 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:26.761 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.761 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
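Symmetrically, the initiator is relaunched from its own saved config: the JSON echoed below (bperfcfg) was captured from the first bdevperf instance with save_config, and the new instance receives it via -c /dev/fd/63. Unlike the target config, it carries bdev_nvme_attach_controller and bdev_enable_histogram entries, so the TLS connection is re-established at startup purely from config. A sketch under the same process-substitution assumption as above:

  # capture the initiator-side config from the running bdevperf
  bperfcfg=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
  # relaunch bdevperf with that config; /dev/fd/63 in the trace is the substitution
  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")

After startup, the test confirms the controller exists with bdev_nvme_get_controllers piped through jq -r '.[].name' (the [[ nvme0 == \n\v\m\e\0 ]] check further down) before running I/O again.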
00:19:26.761 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:26.761 "subsystems": [ 00:19:26.761 { 00:19:26.761 "subsystem": "keyring", 00:19:26.761 "config": [ 00:19:26.761 { 00:19:26.761 "method": "keyring_file_add_key", 00:19:26.761 "params": { 00:19:26.761 "name": "key0", 00:19:26.761 "path": "/tmp/tmp.H3r2RAXB84" 00:19:26.761 } 00:19:26.761 } 00:19:26.761 ] 00:19:26.761 }, 00:19:26.761 { 00:19:26.761 "subsystem": "iobuf", 00:19:26.761 "config": [ 00:19:26.761 { 00:19:26.761 "method": "iobuf_set_options", 00:19:26.761 "params": { 00:19:26.761 "small_pool_count": 8192, 00:19:26.761 "large_pool_count": 1024, 00:19:26.761 "small_bufsize": 8192, 00:19:26.761 "large_bufsize": 135168, 00:19:26.761 "enable_numa": false 00:19:26.761 } 00:19:26.761 } 00:19:26.761 ] 00:19:26.761 }, 00:19:26.761 { 00:19:26.761 "subsystem": "sock", 00:19:26.761 "config": [ 00:19:26.761 { 00:19:26.761 "method": "sock_set_default_impl", 00:19:26.761 "params": { 00:19:26.761 "impl_name": "posix" 00:19:26.761 } 00:19:26.761 }, 00:19:26.761 { 00:19:26.761 "method": "sock_impl_set_options", 00:19:26.761 "params": { 00:19:26.761 "impl_name": "ssl", 00:19:26.761 "recv_buf_size": 4096, 00:19:26.761 "send_buf_size": 4096, 00:19:26.761 "enable_recv_pipe": true, 00:19:26.761 "enable_quickack": false, 00:19:26.761 "enable_placement_id": 0, 00:19:26.761 "enable_zerocopy_send_server": true, 00:19:26.761 "enable_zerocopy_send_client": false, 00:19:26.761 "zerocopy_threshold": 0, 00:19:26.761 "tls_version": 0, 00:19:26.761 "enable_ktls": false 00:19:26.761 } 00:19:26.761 }, 00:19:26.761 { 00:19:26.761 "method": "sock_impl_set_options", 00:19:26.761 "params": { 00:19:26.761 "impl_name": "posix", 00:19:26.761 "recv_buf_size": 2097152, 00:19:26.761 "send_buf_size": 2097152, 00:19:26.761 "enable_recv_pipe": true, 00:19:26.761 "enable_quickack": false, 00:19:26.761 "enable_placement_id": 0, 00:19:26.761 "enable_zerocopy_send_server": true, 00:19:26.761 "enable_zerocopy_send_client": false, 00:19:26.761 "zerocopy_threshold": 0, 00:19:26.761 "tls_version": 0, 00:19:26.761 "enable_ktls": false 00:19:26.761 } 00:19:26.761 } 00:19:26.761 ] 00:19:26.761 }, 00:19:26.761 { 00:19:26.761 "subsystem": "vmd", 00:19:26.761 "config": [] 00:19:26.761 }, 00:19:26.761 { 00:19:26.761 "subsystem": "accel", 00:19:26.761 "config": [ 00:19:26.761 { 00:19:26.761 "method": "accel_set_options", 00:19:26.761 "params": { 00:19:26.761 "small_cache_size": 128, 00:19:26.761 "large_cache_size": 16, 00:19:26.761 "task_count": 2048, 00:19:26.761 "sequence_count": 2048, 00:19:26.761 "buf_count": 2048 00:19:26.761 } 00:19:26.761 } 00:19:26.761 ] 00:19:26.761 }, 00:19:26.761 { 00:19:26.761 "subsystem": "bdev", 00:19:26.761 "config": [ 00:19:26.761 { 00:19:26.761 "method": "bdev_set_options", 00:19:26.761 "params": { 00:19:26.761 "bdev_io_pool_size": 65535, 00:19:26.761 "bdev_io_cache_size": 256, 00:19:26.761 "bdev_auto_examine": true, 00:19:26.761 "iobuf_small_cache_size": 128, 00:19:26.761 "iobuf_large_cache_size": 16 00:19:26.761 } 00:19:26.761 }, 00:19:26.761 { 00:19:26.761 "method": "bdev_raid_set_options", 00:19:26.761 "params": { 00:19:26.761 "process_window_size_kb": 1024, 00:19:26.761 "process_max_bandwidth_mb_sec": 0 00:19:26.761 } 00:19:26.761 }, 00:19:26.761 { 00:19:26.761 "method": "bdev_iscsi_set_options", 00:19:26.761 "params": { 00:19:26.761 "timeout_sec": 30 00:19:26.761 } 00:19:26.761 }, 00:19:26.761 { 00:19:26.761 "method": "bdev_nvme_set_options", 00:19:26.761 "params": { 00:19:26.761 "action_on_timeout": "none", 
00:19:26.761 "timeout_us": 0, 00:19:26.761 "timeout_admin_us": 0, 00:19:26.762 "keep_alive_timeout_ms": 10000, 00:19:26.762 "arbitration_burst": 0, 00:19:26.762 "low_priority_weight": 0, 00:19:26.762 "medium_priority_weight": 0, 00:19:26.762 "high_priority_weight": 0, 00:19:26.762 "nvme_adminq_poll_period_us": 10000, 00:19:26.762 "nvme_ioq_poll_period_us": 0, 00:19:26.762 "io_queue_requests": 512, 00:19:26.762 "delay_cmd_submit": true, 00:19:26.762 "transport_retry_count": 4, 00:19:26.762 "bdev_retry_count": 3, 00:19:26.762 "transport_ack_timeout": 0, 00:19:26.762 "ctrlr_loss_timeout_sec": 0, 00:19:26.762 "reconnect_delay_sec": 0, 00:19:26.762 "fast_io_fail_timeout_sec": 0, 00:19:26.762 "disable_auto_failback": false, 00:19:26.762 "generate_uuids": false, 00:19:26.762 "transport_tos": 0, 00:19:26.762 "nvme_error_stat": false, 00:19:26.762 "rdma_srq_size": 0, 00:19:26.762 "io_path_stat": false, 00:19:26.762 "allow_accel_sequence": false, 00:19:26.762 "rdma_max_cq_size": 0, 00:19:26.762 "rdma_cm_event_timeout_ms": 0, 00:19:26.762 "dhchap_digests": [ 00:19:26.762 "sha256", 00:19:26.762 "sha384", 00:19:26.762 "sha512" 00:19:26.762 ], 00:19:26.762 "dhchap_dhgroups": [ 00:19:26.762 "null", 00:19:26.762 "ffdhe2048", 00:19:26.762 "ffdhe3072", 00:19:26.762 "ffdhe4096", 00:19:26.762 "ffdhe6144", 00:19:26.762 "ffdhe8192" 00:19:26.762 ] 00:19:26.762 } 00:19:26.762 }, 00:19:26.762 { 00:19:26.762 "method": "bdev_nvme_attach_controller", 00:19:26.762 "params": { 00:19:26.762 "name": "nvme0", 00:19:26.762 "trtype": "TCP", 00:19:26.762 "adrfam": "IPv4", 00:19:26.762 "traddr": "10.0.0.2", 00:19:26.762 "trsvcid": "4420", 00:19:26.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.762 "prchk_reftag": false, 00:19:26.762 "prchk_guard": false, 00:19:26.762 "ctrlr_loss_timeout_sec": 0, 00:19:26.762 "reconnect_delay_sec": 0, 00:19:26.762 "fast_io_fail_timeout_sec": 0, 00:19:26.762 "psk": "key0", 00:19:26.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.762 "hdgst": false, 00:19:26.762 "ddgst": false, 00:19:26.762 "multipath": "multipath" 00:19:26.762 } 00:19:26.762 }, 00:19:26.762 { 00:19:26.762 "method": "bdev_nvme_set_hotplug", 00:19:26.762 "params": { 00:19:26.762 "period_us": 100000, 00:19:26.762 "enable": false 00:19:26.762 } 00:19:26.762 }, 00:19:26.762 { 00:19:26.762 "method": "bdev_enable_histogram", 00:19:26.762 "params": { 00:19:26.762 "name": "nvme0n1", 00:19:26.762 "enable": true 00:19:26.762 } 00:19:26.762 }, 00:19:26.762 { 00:19:26.762 "method": "bdev_wait_for_examine" 00:19:26.762 } 00:19:26.762 ] 00:19:26.762 }, 00:19:26.762 { 00:19:26.762 "subsystem": "nbd", 00:19:26.762 "config": [] 00:19:26.762 } 00:19:26.762 ] 00:19:26.762 }' 00:19:26.762 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.762 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.762 [2024-11-19 13:11:29.918537] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:19:26.762 [2024-11-19 13:11:29.918585] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868643 ] 00:19:26.762 [2024-11-19 13:11:29.994639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.762 [2024-11-19 13:11:30.044272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.020 [2024-11-19 13:11:30.198252] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.588 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.588 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:27.588 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:27.588 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:27.588 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.588 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:27.848 Running I/O for 1 seconds... 00:19:28.785 5343.00 IOPS, 20.87 MiB/s 00:19:28.785 Latency(us) 00:19:28.785 [2024-11-19T12:11:32.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.785 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:28.785 Verification LBA range: start 0x0 length 0x2000 00:19:28.785 nvme0n1 : 1.01 5404.23 21.11 0.00 0.00 23527.96 5242.88 23934.89 00:19:28.785 [2024-11-19T12:11:32.162Z] =================================================================================================================== 00:19:28.785 [2024-11-19T12:11:32.162Z] Total : 5404.23 21.11 0.00 0.00 23527.96 5242.88 23934.89 00:19:28.785 { 00:19:28.785 "results": [ 00:19:28.785 { 00:19:28.785 "job": "nvme0n1", 00:19:28.785 "core_mask": "0x2", 00:19:28.785 "workload": "verify", 00:19:28.785 "status": "finished", 00:19:28.785 "verify_range": { 00:19:28.785 "start": 0, 00:19:28.785 "length": 8192 00:19:28.785 }, 00:19:28.785 "queue_depth": 128, 00:19:28.785 "io_size": 4096, 00:19:28.785 "runtime": 1.01254, 00:19:28.785 "iops": 5404.230943962708, 00:19:28.785 "mibps": 21.110277124854328, 00:19:28.785 "io_failed": 0, 00:19:28.785 "io_timeout": 0, 00:19:28.785 "avg_latency_us": 23527.96086448004, 00:19:28.785 "min_latency_us": 5242.88, 00:19:28.785 "max_latency_us": 23934.88695652174 00:19:28.785 } 00:19:28.785 ], 00:19:28.785 "core_count": 1 00:19:28.785 } 00:19:28.786 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:28.786 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:28.786 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:28.786 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:28.786 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:28.786 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 
00:19:28.786 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:28.786 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:28.786 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:28.786 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:28.786 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:28.786 nvmf_trace.0 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2868643 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2868643 ']' 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2868643 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2868643 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2868643' 00:19:29.045 killing process with pid 2868643 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2868643 00:19:29.045 Received shutdown signal, test time was about 1.000000 seconds 00:19:29.045 00:19:29.045 Latency(us) 00:19:29.045 [2024-11-19T12:11:32.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.045 [2024-11-19T12:11:32.422Z] =================================================================================================================== 00:19:29.045 [2024-11-19T12:11:32.422Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2868643 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:29.045 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:29.045 rmmod nvme_tcp 00:19:29.045 rmmod nvme_fabrics 00:19:29.304 rmmod nvme_keyring 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:29.304 13:11:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2868401 ']' 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2868401 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2868401 ']' 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2868401 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2868401 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2868401' 00:19:29.304 killing process with pid 2868401 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2868401 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2868401 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.304 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.CNYuxPAb46 /tmp/tmp.3adyqRTYYc /tmp/tmp.H3r2RAXB84 00:19:31.842 00:19:31.842 real 1m20.318s 00:19:31.842 user 2m3.595s 00:19:31.842 sys 0m30.523s 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.842 ************************************ 00:19:31.842 END TEST nvmf_tls 
00:19:31.842 ************************************ 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:31.842 ************************************ 00:19:31.842 START TEST nvmf_fips 00:19:31.842 ************************************ 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:31.842 * Looking for test storage... 00:19:31.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:31.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.842 --rc genhtml_branch_coverage=1 00:19:31.842 --rc genhtml_function_coverage=1 00:19:31.842 --rc genhtml_legend=1 00:19:31.842 --rc geninfo_all_blocks=1 00:19:31.842 --rc geninfo_unexecuted_blocks=1 00:19:31.842 00:19:31.842 ' 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:31.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.842 --rc genhtml_branch_coverage=1 00:19:31.842 --rc genhtml_function_coverage=1 00:19:31.842 --rc genhtml_legend=1 00:19:31.842 --rc geninfo_all_blocks=1 00:19:31.842 --rc geninfo_unexecuted_blocks=1 00:19:31.842 00:19:31.842 ' 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:31.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.842 --rc genhtml_branch_coverage=1 00:19:31.842 --rc genhtml_function_coverage=1 00:19:31.842 --rc genhtml_legend=1 00:19:31.842 --rc geninfo_all_blocks=1 00:19:31.842 --rc geninfo_unexecuted_blocks=1 00:19:31.842 00:19:31.842 ' 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:31.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.842 --rc genhtml_branch_coverage=1 00:19:31.842 --rc genhtml_function_coverage=1 00:19:31.842 --rc genhtml_legend=1 00:19:31.842 --rc geninfo_all_blocks=1 00:19:31.842 --rc geninfo_unexecuted_blocks=1 00:19:31.842 00:19:31.842 ' 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.842 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.843 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:31.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:31.843 13:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:31.843 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:31.844 Error setting digest 00:19:31.844 4002D4152D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:31.844 4002D4152D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:31.844 
13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:31.844 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:38.417 13:11:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:38.417 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:38.417 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:38.417 13:11:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:38.417 Found net devices under 0000:86:00.0: cvl_0_0 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:38.417 Found net devices under 0000:86:00.1: cvl_0_1 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:38.417 13:11:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:38.417 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:38.417 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:38.417 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:38.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:38.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:19:38.418 00:19:38.418 --- 10.0.0.2 ping statistics --- 00:19:38.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.418 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:38.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:38.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:19:38.418 00:19:38.418 --- 10.0.0.1 ping statistics --- 00:19:38.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.418 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2872650 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2872650 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2872650 ']' 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.418 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:38.418 [2024-11-19 13:11:41.197764] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
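The nvmfappstart above wraps the target in the cvl_0_0_ns_spdk namespace created earlier, so the 10.0.0.2 listener lives behind the interface that the pings just verified. A minimal manual equivalent of that launch, assuming the same namespace and workspace layout as this job (the -S socket test below is a simplification of the harness's waitforlisten retry loop, which polls via rpc.py):

    sudo ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 &
    # wait for the RPC socket before issuing any rpc.py calls
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done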
00:19:38.418 [2024-11-19 13:11:41.197824] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.418 [2024-11-19 13:11:41.279168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.418 [2024-11-19 13:11:41.318461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.418 [2024-11-19 13:11:41.318495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.418 [2024-11-19 13:11:41.318502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.418 [2024-11-19 13:11:41.318507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.418 [2024-11-19 13:11:41.318512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.418 [2024-11-19 13:11:41.319111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.677 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:38.677 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:38.677 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:38.677 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:38.677 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:38.936 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.936 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:38.936 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:38.936 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:38.936 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.AWK 00:19:38.936 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:38.936 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.AWK 00:19:38.936 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.AWK 00:19:38.936 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.AWK 00:19:38.936 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:38.936 [2024-11-19 13:11:42.239491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.936 [2024-11-19 13:11:42.255498] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:38.936 [2024-11-19 13:11:42.255708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.936 malloc0 00:19:39.195 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:39.195 13:11:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2872776 00:19:39.195 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2872776 /var/tmp/bdevperf.sock 00:19:39.196 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:39.196 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2872776 ']' 00:19:39.196 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.196 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.196 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.196 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.196 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:39.196 [2024-11-19 13:11:42.386998] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:39.196 [2024-11-19 13:11:42.387051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2872776 ] 00:19:39.196 [2024-11-19 13:11:42.462788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.196 [2024-11-19 13:11:42.503736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.131 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.131 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:40.131 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.AWK 00:19:40.131 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:40.390 [2024-11-19 13:11:43.580542] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.390 TLSTESTn1 00:19:40.390 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:40.649 Running I/O for 10 seconds... 
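Two RPCs against the bdevperf socket do all of the initiator-side TLS wiring here: the PSK file is registered as key0, then the controller attach references it. Condensed from the trace above with the same arguments (only the long workspace prefix on rpc.py is shortened for readability):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.AWK
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

A successful attach surfaces as the TLSTESTn1 bdev that the ten-second verify workload below then drives.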
00:19:42.521 5321.00 IOPS, 20.79 MiB/s [2024-11-19T12:11:46.835Z] 5361.50 IOPS, 20.94 MiB/s [2024-11-19T12:11:48.211Z] 5418.33 IOPS, 21.17 MiB/s [2024-11-19T12:11:49.148Z] 5405.00 IOPS, 21.11 MiB/s [2024-11-19T12:11:50.084Z] 5425.20 IOPS, 21.19 MiB/s [2024-11-19T12:11:51.021Z] 5426.00 IOPS, 21.20 MiB/s [2024-11-19T12:11:51.957Z] 5420.14 IOPS, 21.17 MiB/s [2024-11-19T12:11:52.893Z] 5430.00 IOPS, 21.21 MiB/s [2024-11-19T12:11:53.830Z] 5436.33 IOPS, 21.24 MiB/s [2024-11-19T12:11:53.830Z] 5444.50 IOPS, 21.27 MiB/s 00:19:50.453 Latency(us) 00:19:50.453 [2024-11-19T12:11:53.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.453 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:50.453 Verification LBA range: start 0x0 length 0x2000 00:19:50.453 TLSTESTn1 : 10.02 5447.51 21.28 0.00 0.00 23460.76 5698.78 22795.13 00:19:50.453 [2024-11-19T12:11:53.830Z] =================================================================================================================== 00:19:50.453 [2024-11-19T12:11:53.830Z] Total : 5447.51 21.28 0.00 0.00 23460.76 5698.78 22795.13 00:19:50.453 { 00:19:50.453 "results": [ 00:19:50.453 { 00:19:50.453 "job": "TLSTESTn1", 00:19:50.453 "core_mask": "0x4", 00:19:50.453 "workload": "verify", 00:19:50.453 "status": "finished", 00:19:50.453 "verify_range": { 00:19:50.453 "start": 0, 00:19:50.453 "length": 8192 00:19:50.453 }, 00:19:50.453 "queue_depth": 128, 00:19:50.453 "io_size": 4096, 00:19:50.453 "runtime": 10.017609, 00:19:50.453 "iops": 5447.507484071299, 00:19:50.453 "mibps": 21.27932610965351, 00:19:50.453 "io_failed": 0, 00:19:50.453 "io_timeout": 0, 00:19:50.453 "avg_latency_us": 23460.761787890206, 00:19:50.453 "min_latency_us": 5698.782608695652, 00:19:50.453 "max_latency_us": 22795.130434782608 00:19:50.453 } 00:19:50.453 ], 00:19:50.453 "core_count": 1 00:19:50.453 } 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:50.713 nvmf_trace.0 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2872776 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2872776 ']' 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 2872776 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2872776 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2872776' 00:19:50.713 killing process with pid 2872776 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2872776 00:19:50.713 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.713 00:19:50.713 Latency(us) 00:19:50.713 [2024-11-19T12:11:54.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.713 [2024-11-19T12:11:54.090Z] =================================================================================================================== 00:19:50.713 [2024-11-19T12:11:54.090Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.713 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2872776 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:50.973 rmmod nvme_tcp 00:19:50.973 rmmod nvme_fabrics 00:19:50.973 rmmod nvme_keyring 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2872650 ']' 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2872650 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2872650 ']' 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2872650 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2872650 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:50.973 13:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2872650' 00:19:50.973 killing process with pid 2872650 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2872650 00:19:50.973 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2872650 00:19:51.233 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:51.233 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:51.233 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:51.233 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:51.233 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:51.233 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:51.233 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:51.233 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:51.233 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:51.233 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.233 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.233 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.140 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:53.140 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.AWK 00:19:53.140 00:19:53.140 real 0m21.684s 00:19:53.140 user 0m23.537s 00:19:53.140 sys 0m9.612s 00:19:53.140 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.140 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:53.140 ************************************ 00:19:53.140 END TEST nvmf_fips 00:19:53.140 ************************************ 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:53.400 ************************************ 00:19:53.400 START TEST nvmf_control_msg_list 00:19:53.400 ************************************ 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:53.400 * Looking for test storage... 
00:19:53.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:53.400 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:53.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.401 --rc genhtml_branch_coverage=1 00:19:53.401 --rc genhtml_function_coverage=1 00:19:53.401 --rc genhtml_legend=1 00:19:53.401 --rc geninfo_all_blocks=1 00:19:53.401 --rc geninfo_unexecuted_blocks=1 00:19:53.401 00:19:53.401 ' 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:53.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.401 --rc genhtml_branch_coverage=1 00:19:53.401 --rc genhtml_function_coverage=1 00:19:53.401 --rc genhtml_legend=1 00:19:53.401 --rc geninfo_all_blocks=1 00:19:53.401 --rc geninfo_unexecuted_blocks=1 00:19:53.401 00:19:53.401 ' 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:53.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.401 --rc genhtml_branch_coverage=1 00:19:53.401 --rc genhtml_function_coverage=1 00:19:53.401 --rc genhtml_legend=1 00:19:53.401 --rc geninfo_all_blocks=1 00:19:53.401 --rc geninfo_unexecuted_blocks=1 00:19:53.401 00:19:53.401 ' 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:53.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.401 --rc genhtml_branch_coverage=1 00:19:53.401 --rc genhtml_function_coverage=1 00:19:53.401 --rc genhtml_legend=1 00:19:53.401 --rc geninfo_all_blocks=1 00:19:53.401 --rc geninfo_unexecuted_blocks=1 00:19:53.401 00:19:53.401 ' 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:53.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:53.401 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:53.402 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:53.402 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:53.661 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.661 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:53.661 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:53.661 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:53.661 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.661 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.661 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.661 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:53.661 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:53.661 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:53.661 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:00.233 13:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.233 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:00.234 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.234 13:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:00.234 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:00.234 Found net devices under 0000:86:00.0: cvl_0_0 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:00.234 Found net devices under 0000:86:00.1: cvl_0_1 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.234 13:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:00.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:20:00.234 00:20:00.234 --- 10.0.0.2 ping statistics --- 00:20:00.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.234 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:00.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:20:00.234 00:20:00.234 --- 10.0.0.1 ping statistics --- 00:20:00.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.234 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:00.234 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2878425 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2878425 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2878425 ']' 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:00.235 [2024-11-19 13:12:02.699808] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:20:00.235 [2024-11-19 13:12:02.699854] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.235 [2024-11-19 13:12:02.762449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.235 [2024-11-19 13:12:02.805124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.235 [2024-11-19 13:12:02.805158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.235 [2024-11-19 13:12:02.805166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.235 [2024-11-19 13:12:02.805172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.235 [2024-11-19 13:12:02.805177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
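The trace above has just brought up the isolated TCP test topology this suite runs on: one port of the NIC pair (cvl_0_0) is moved into a private network namespace for the target, both ends get 10.0.0.x/24 addresses, an iptables rule opens NVMe/TCP port 4420, and reachability is verified with ping in both directions. A minimal stand-alone sketch of that bring-up, assuming the same cvl_0_0/cvl_0_1 device names seen on this machine:

    # create a namespace for the target side and move one port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator (host) side and the target (namespace) side
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # bring both links up, plus loopback inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open port 4420; the SPDK_NVMF comment lets teardown strip the rule
    # later via: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # verify reachability in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk so that it listens on the target-side address, which is the nvmfappstart step traced immediately above.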
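The rpc_cmd sequence traced next configures the target for the control-message-list case: a TCP transport with a small 768-byte in-capsule data size and a single control message (--control-msg-num 1), subsystem nqn.2024-07.io.spdk:cnode0 backed by a 32 MB Malloc0 namespace, and a listener on 10.0.0.2:4420; three spdk_nvme_perf instances (cores 0x2/0x4/0x8, queue depth 1, 4096-byte random reads for 1 second) then run in parallel so their traffic contends for that one control message. A hedged sketch of the same setup issued through scripts/rpc.py (the test drives these through its rpc_cmd wrapper, so the exact rpc.py spelling here is an assumption):

    # assumed rpc.py equivalents of the rpc_cmd calls in the trace below
    scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The per-core latency tables further down reflect that contention: the first two perf workers complete at roughly 3900 IOPS with sub-millisecond average latency, while the third finishes only 29 I/Os at about 35 ms average, consistent with it waiting on the single shared control message.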
00:20:00.235 [2024-11-19 13:12:02.805741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:00.235 [2024-11-19 13:12:02.951031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:00.235 Malloc0 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.235 13:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:00.235 [2024-11-19 13:12:02.991397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2878447 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2878448 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2878449 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2878447 00:20:00.235 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:00.235 [2024-11-19 13:12:03.069943] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:00.235 [2024-11-19 13:12:03.070166] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:00.235 [2024-11-19 13:12:03.080046] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:00.803 Initializing NVMe Controllers 00:20:00.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:00.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:00.803 Initialization complete. Launching workers. 
00:20:00.803 ======================================================== 00:20:00.803 Latency(us) 00:20:00.803 Device Information : IOPS MiB/s Average min max 00:20:00.803 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3901.00 15.24 255.96 133.15 514.59 00:20:00.803 ======================================================== 00:20:00.803 Total : 3901.00 15.24 255.96 133.15 514.59 00:20:00.804 00:20:00.804 [2024-11-19 13:12:04.133970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1858f40 is same with the state(6) to be set 00:20:00.804 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2878448 00:20:00.804 Initializing NVMe Controllers 00:20:00.804 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:00.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:00.804 Initialization complete. Launching workers. 00:20:00.804 ======================================================== 00:20:00.804 Latency(us) 00:20:00.804 Device Information : IOPS MiB/s Average min max 00:20:00.804 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3879.00 15.15 257.38 141.71 502.86 00:20:00.804 ======================================================== 00:20:00.804 Total : 3879.00 15.15 257.38 141.71 502.86 00:20:00.804 00:20:00.804 [2024-11-19 13:12:04.143826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185f900 is same with the state(6) to be set 00:20:00.804 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2878449 00:20:01.063 Initializing NVMe Controllers 00:20:01.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:01.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:01.063 Initialization complete. Launching workers. 
00:20:01.063 ======================================================== 00:20:01.063 Latency(us) 00:20:01.063 Device Information : IOPS MiB/s Average min max 00:20:01.063 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 29.00 0.11 35271.36 261.01 41141.22 00:20:01.063 ======================================================== 00:20:01.063 Total : 29.00 0.11 35271.36 261.01 41141.22 00:20:01.063 00:20:01.063 [2024-11-19 13:12:04.197394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172a370 is same with the state(6) to be set 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:01.064 rmmod nvme_tcp 00:20:01.064 rmmod nvme_fabrics 00:20:01.064 rmmod nvme_keyring 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2878425 ']' 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2878425 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2878425 ']' 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2878425 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2878425 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2878425' 00:20:01.064 killing process with pid 2878425 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2878425 00:20:01.064 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2878425 00:20:01.323 13:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:01.323 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:01.323 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:01.323 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:01.323 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:01.323 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:01.323 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:01.323 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:01.323 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:01.323 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.323 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.323 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.231 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:03.231 00:20:03.231 real 0m9.984s 00:20:03.231 user 0m6.505s 00:20:03.231 sys 0m5.348s 00:20:03.231 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.231 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:03.231 ************************************ 00:20:03.231 END TEST nvmf_control_msg_list 00:20:03.231 ************************************ 00:20:03.231 13:12:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:03.231 13:12:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:03.231 13:12:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.231 13:12:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:03.491 ************************************ 00:20:03.491 START TEST nvmf_wait_for_buf 00:20:03.491 ************************************ 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:03.491 * Looking for test storage... 
00:20:03.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:03.491 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:03.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.492 --rc genhtml_branch_coverage=1 00:20:03.492 --rc genhtml_function_coverage=1 00:20:03.492 --rc genhtml_legend=1 00:20:03.492 --rc geninfo_all_blocks=1 00:20:03.492 --rc geninfo_unexecuted_blocks=1 00:20:03.492 00:20:03.492 ' 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:03.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.492 --rc genhtml_branch_coverage=1 00:20:03.492 --rc genhtml_function_coverage=1 00:20:03.492 --rc genhtml_legend=1 00:20:03.492 --rc geninfo_all_blocks=1 00:20:03.492 --rc geninfo_unexecuted_blocks=1 00:20:03.492 00:20:03.492 ' 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:03.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.492 --rc genhtml_branch_coverage=1 00:20:03.492 --rc genhtml_function_coverage=1 00:20:03.492 --rc genhtml_legend=1 00:20:03.492 --rc geninfo_all_blocks=1 00:20:03.492 --rc geninfo_unexecuted_blocks=1 00:20:03.492 00:20:03.492 ' 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:03.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.492 --rc genhtml_branch_coverage=1 00:20:03.492 --rc genhtml_function_coverage=1 00:20:03.492 --rc genhtml_legend=1 00:20:03.492 --rc geninfo_all_blocks=1 00:20:03.492 --rc geninfo_unexecuted_blocks=1 00:20:03.492 00:20:03.492 ' 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:03.492 13:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:03.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:03.492 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:10.077 
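Two script defects surface in the trace above and recur later in this log. First, nvmf/common.sh line 33 compares an empty value numerically ('[' '' -eq 1 ']'), which bash rejects with "[: : integer expression expected"; the name of the variable being tested is not visible in the trace, so VAR below is a stand-in. Second, paths/export.sh prepends the same toolchain directories every time it is sourced, which is why the PATH above carries several copies of the go/golangci/protoc entries. A minimal sketch of both fixes, assuming nothing beyond what the trace shows:

  # VAR is hypothetical; common.sh line 33 tests some variable that is empty here
  [ "${VAR:-0}" -eq 1 ]   # empty/unset falls back to 0: no error, the test is simply false

  # dedupe instead of blindly prepending (directories taken from the PATH above)
  for dir in /opt/golangci/1.54.2/bin /opt/protoc/21.7/bin /opt/go/1.21.1/bin; do
      case ":$PATH:" in *":$dir:"*) ;; *) PATH="$dir:$PATH" ;; esac
  done
  export PATH

Neither defect is fatal in this run: the '[' failure only makes the condition evaluate false, and the inflated PATH is merely noisy.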
13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:10.077 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:10.077 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:10.077 Found net devices under 0000:86:00.0: cvl_0_0 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.077 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:10.078 Found net devices under 0000:86:00.1: cvl_0_1 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:10.078 13:12:12 
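The loop traced above (nvmf/common.sh@410-429) is how the harness maps supported NICs to kernel interface names: for each PCI function it globs the device's net/ directory in sysfs, keeps interfaces that are up, and strips the path down to the bare name. A condensed sketch using the two E810 functions this host reports; the script's actual link-state check is not visible in the trace, so reading operstate is an assumption:

  pci_devs=(0000:86:00.0 0000:86:00.1)   # the two "Found 0000:86:00.x" hits above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      for path in /sys/bus/pci/devices/$pci/net/*; do
          dev=${path##*/}                                      # cvl_0_0, cvl_0_1
          [[ $(cat "$path/operstate" 2>/dev/null) == up ]] || continue
          echo "Found net devices under $pci: $dev"
          net_devs+=("$dev")
      done
  done

With two interfaces found, nvmf_tcp_init then assigns cvl_0_0 as the target port and cvl_0_1 as the initiator port, as the variables that follow show.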
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:10.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:10.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:20:10.078 00:20:10.078 --- 10.0.0.2 ping statistics --- 00:20:10.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.078 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:10.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:10.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:20:10.078 00:20:10.078 --- 10.0.0.1 ping statistics --- 00:20:10.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.078 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2882599 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2882599 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2882599 ']' 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.078 [2024-11-19 13:12:12.844683] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
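The nvmf_tcp_init sequence above builds the test topology: the target-side port is moved into a private network namespace so that traffic between 10.0.0.1 and 10.0.0.2 traverses the link between the two E810 functions instead of being short-circuited by the local stack. Condensed from the commands in the trace, in order:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # tagged SPDK_NVMF for later cleanup
  ping -c 1 10.0.0.2                                  # host -> namespace, verified above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host

Every target-side command from this point on, including nvmf_tgt itself, is prefixed with "ip netns exec cvl_0_0_ns_spdk", which is what NVMF_TARGET_NS_CMD holds.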
00:20:10.078 [2024-11-19 13:12:12.844736] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.078 [2024-11-19 13:12:12.926282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.078 [2024-11-19 13:12:12.967624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.078 [2024-11-19 13:12:12.967658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.078 [2024-11-19 13:12:12.967665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.078 [2024-11-19 13:12:12.967671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.078 [2024-11-19 13:12:12.967676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:10.078 [2024-11-19 13:12:12.968202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:10.078 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.078 13:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.078 Malloc0 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.078 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:10.079 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.079 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.079 [2024-11-19 13:12:13.132266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.079 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.079 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:10.079 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.079 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.079 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.079 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:10.079 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.079 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.079 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.079 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:10.079 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.079 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.079 [2024-11-19 13:12:13.160445] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.079 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.079 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:10.079 [2024-11-19 13:12:13.249022] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:11.458 Initializing NVMe Controllers 00:20:11.458 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:11.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:11.458 Initialization complete. Launching workers. 00:20:11.458 ======================================================== 00:20:11.458 Latency(us) 00:20:11.458 Device Information : IOPS MiB/s Average min max 00:20:11.458 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 130.00 16.25 32082.63 7263.87 63847.50 00:20:11.458 ======================================================== 00:20:11.458 Total : 130.00 16.25 32082.63 7263.87 63847.50 00:20:11.458 00:20:11.458 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:11.458 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:11.458 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.458 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:11.458 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.458 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2054 00:20:11.458 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2054 -eq 0 ]] 00:20:11.458 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:11.458 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:11.458 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:11.459 rmmod nvme_tcp 00:20:11.459 rmmod nvme_fabrics 00:20:11.459 rmmod nvme_keyring 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2882599 ']' 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2882599 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2882599 ']' 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2882599 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2882599 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2882599' 00:20:11.459 killing process with pid 2882599 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2882599 00:20:11.459 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2882599 00:20:11.718 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:11.718 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:11.718 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:11.718 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:11.718 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:11.718 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:11.718 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:11.718 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:11.718 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:11.718 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.718 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.718 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.258 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:14.258 00:20:14.258 real 0m10.419s 00:20:14.258 user 0m3.980s 00:20:14.258 sys 0m4.885s 00:20:14.258 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.258 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:14.258 ************************************ 00:20:14.258 END TEST nvmf_wait_for_buf 00:20:14.258 ************************************ 00:20:14.258 13:12:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:14.258 13:12:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:14.258 13:12:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:14.258 13:12:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:14.258 13:12:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:14.258 13:12:17 
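Condensing the wait_for_buf flow traced above: the target starts with --wait-for-rpc, the small iobuf pool is deliberately shrunk to 154 buffers before framework init, a malloc namespace is exported over TCP, and a short randread load is applied; the test then passes only if the undersized pool actually forced buffer-wait retries (2054 in this run). A sketch spelled out with scripts/rpc.py, which the rpc_cmd helper in the trace wraps; all values are the ones from this log:

  rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
  rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # deliberately undersized
  rpc.py framework_start_init
  rpc.py bdev_malloc_create -b Malloc0 32 512
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  retry=$(rpc.py iobuf_get_stats \
      | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [[ $retry -eq 0 ]] && exit 1   # 2054 retries observed above, so the test passes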
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:19.535 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:19.535 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:19.535 Found net devices under 0000:86:00.0: cvl_0_0 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:19.535 Found net devices under 0000:86:00.1: cvl_0_1 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:19.535 13:12:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:19.535 ************************************ 00:20:19.535 START TEST nvmf_perf_adq 00:20:19.536 ************************************ 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:19.536 * Looking for test storage... 00:20:19.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:19.536 13:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:19.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.536 --rc genhtml_branch_coverage=1 00:20:19.536 --rc genhtml_function_coverage=1 00:20:19.536 --rc genhtml_legend=1 00:20:19.536 --rc geninfo_all_blocks=1 00:20:19.536 --rc geninfo_unexecuted_blocks=1 00:20:19.536 00:20:19.536 ' 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:19.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.536 --rc genhtml_branch_coverage=1 00:20:19.536 --rc genhtml_function_coverage=1 00:20:19.536 --rc genhtml_legend=1 00:20:19.536 --rc geninfo_all_blocks=1 00:20:19.536 --rc geninfo_unexecuted_blocks=1 00:20:19.536 00:20:19.536 ' 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:19.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.536 --rc genhtml_branch_coverage=1 00:20:19.536 --rc genhtml_function_coverage=1 00:20:19.536 --rc genhtml_legend=1 00:20:19.536 --rc geninfo_all_blocks=1 00:20:19.536 --rc geninfo_unexecuted_blocks=1 00:20:19.536 00:20:19.536 ' 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:19.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.536 --rc genhtml_branch_coverage=1 00:20:19.536 --rc genhtml_function_coverage=1 00:20:19.536 --rc genhtml_legend=1 00:20:19.536 --rc geninfo_all_blocks=1 00:20:19.536 --rc geninfo_unexecuted_blocks=1 00:20:19.536 00:20:19.536 ' 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
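Before the ADQ test proper, common autotest plumbing (traced at scripts/common.sh@333-368 above) decides whether the installed lcov predates 2.0, which selects the branch/function-coverage flags exported just above. The comparison splits each version on '.' and '-' and compares numerically field by field; a minimal re-sketch of that walk, simplified to assume purely numeric fields (the real script first validates each field with a ^[0-9]+$ check, visible as the "decimal" calls above):

  lt() {   # "less than": returns 0 when $1 < $2
      local -a v1 v2
      IFS=.- read -ra v1 <<< "$1"
      IFS=.- read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal
  }
  lt 1.15 2 && echo older   # 1 < 2 on the first field, matching the trace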
00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.536 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.795 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:19.795 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:19.795 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.795 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.795 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:19.795 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.795 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:19.795 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:19.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:19.796 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:19.796 13:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:25.221 13:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:25.221 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:25.221 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:25.221 Found net devices under 0000:86:00.0: cvl_0_0 00:20:25.221 13:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.221 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:25.221 Found net devices under 0000:86:00.1: cvl_0_1 00:20:25.222 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.222 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:25.222 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.222 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:25.222 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:25.222 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:25.222 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:25.222 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:26.600 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:28.505 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:33.777 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:33.778 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:33.778 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:33.778 Found net devices under 0000:86:00.0: cvl_0_0 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:33.778 Found net devices under 0000:86:00.1: cvl_0_1 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:33.778 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:33.778 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:33.778 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:33.778 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:33.778 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:33.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:20:33.778 00:20:33.778 --- 10.0.0.2 ping statistics --- 00:20:33.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.778 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:20:33.778 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:33.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:33.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:20:33.778 00:20:33.778 --- 10.0.0.1 ping statistics --- 00:20:33.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.778 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:20:33.778 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.778 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:33.778 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:33.778 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.778 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:33.778 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:33.778 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.778 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:33.778 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:33.778 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:33.778 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:33.779 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:33.779 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:33.779 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2890946 00:20:33.779 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2890946 00:20:33.779 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:33.779 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2890946 ']' 00:20:33.779 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.779 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.779 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.779 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.779 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.038 [2024-11-19 13:12:37.166656] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
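nvmf_tcp_init, replayed above, splits the two E810 ports across network namespaces so traffic really crosses the NIC instead of loopback: cvl_0_0 moves into namespace cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened with a tagged iptables rule, and both directions are ping-verified. Collected into one runnable block, commands exactly as traced:

# nvmf/common.sh@250-291 condensed: target port in its own namespace,
# initiator port in the root namespace, NVMe/TCP port opened, path verified.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The SPDK_NVMF comment tag lets the later iptr cleanup (nvmf/common.sh@297 further down) strip exactly these rules via iptables-save | grep -v SPDK_NVMF | iptables-restore. The target itself is then launched inside the namespace with -m 0xF (four reactors) and --wait-for-rpc, so socket options can be applied before framework initialization.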
00:20:34.038 [2024-11-19 13:12:37.166708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.038 [2024-11-19 13:12:37.247702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:34.038 [2024-11-19 13:12:37.290045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.038 [2024-11-19 13:12:37.290084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.038 [2024-11-19 13:12:37.290092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.038 [2024-11-19 13:12:37.290099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.038 [2024-11-19 13:12:37.290104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.038 [2024-11-19 13:12:37.291588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.038 [2024-11-19 13:12:37.291695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.038 [2024-11-19 13:12:37.291782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.038 [2024-11-19 13:12:37.291783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.038 
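adq_configure_nvmf_target 0 now drives the freshly started target over /var/tmp/spdk.sock; rpc_cmd in the trace wraps scripts/rpc.py. A sketch of the full sequence for this baseline run, the remainder of which replays in the lines that follow (all RPC names and flags are copied from the trace; the $RPC shorthand is mine):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
impl=$($RPC sock_get_default_impl | jq -r .impl_name)      # -> posix
# Baseline run: placement-id 0 (no qpair grouping), sock priority 0.
# Must land before framework_start_init because of --wait-for-rpc.
$RPC sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i "$impl"
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
$RPC bdev_malloc_create 64 512 -b Malloc1                  # 64 MiB RAM-backed namespace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420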
13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.038 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.297 [2024-11-19 13:12:37.493331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.297 Malloc1 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.297 [2024-11-19 13:12:37.552594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2890974 00:20:34.297 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:34.298 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:36.201 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:36.201 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.201 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.460 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.460 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:36.460 "tick_rate": 2300000000, 00:20:36.460 "poll_groups": [ 00:20:36.460 { 00:20:36.460 "name": "nvmf_tgt_poll_group_000", 00:20:36.460 "admin_qpairs": 1, 00:20:36.460 "io_qpairs": 1, 00:20:36.460 "current_admin_qpairs": 1, 00:20:36.460 "current_io_qpairs": 1, 00:20:36.460 "pending_bdev_io": 0, 00:20:36.460 "completed_nvme_io": 19914, 00:20:36.460 "transports": [ 00:20:36.460 { 00:20:36.460 "trtype": "TCP" 00:20:36.460 } 00:20:36.460 ] 00:20:36.460 }, 00:20:36.460 { 00:20:36.460 "name": "nvmf_tgt_poll_group_001", 00:20:36.460 "admin_qpairs": 0, 00:20:36.460 "io_qpairs": 1, 00:20:36.460 "current_admin_qpairs": 0, 00:20:36.460 "current_io_qpairs": 1, 00:20:36.460 "pending_bdev_io": 0, 00:20:36.460 "completed_nvme_io": 20088, 00:20:36.461 "transports": [ 00:20:36.461 { 00:20:36.461 "trtype": "TCP" 00:20:36.461 } 00:20:36.461 ] 00:20:36.461 }, 00:20:36.461 { 00:20:36.461 "name": "nvmf_tgt_poll_group_002", 00:20:36.461 "admin_qpairs": 0, 00:20:36.461 "io_qpairs": 1, 00:20:36.461 "current_admin_qpairs": 0, 00:20:36.461 "current_io_qpairs": 1, 00:20:36.461 "pending_bdev_io": 0, 00:20:36.461 "completed_nvme_io": 20208, 00:20:36.461 "transports": [ 00:20:36.461 { 00:20:36.461 "trtype": "TCP" 00:20:36.461 } 00:20:36.461 ] 00:20:36.461 }, 00:20:36.461 { 00:20:36.461 "name": "nvmf_tgt_poll_group_003", 00:20:36.461 "admin_qpairs": 0, 00:20:36.461 "io_qpairs": 1, 00:20:36.461 "current_admin_qpairs": 0, 00:20:36.461 "current_io_qpairs": 1, 00:20:36.461 "pending_bdev_io": 0, 00:20:36.461 "completed_nvme_io": 19947, 00:20:36.461 "transports": [ 00:20:36.461 { 00:20:36.461 "trtype": "TCP" 00:20:36.461 } 00:20:36.461 ] 00:20:36.461 } 00:20:36.461 ] 00:20:36.461 }' 00:20:36.461 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:36.461 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:36.461 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:36.461 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:36.461 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2890974 00:20:44.577 Initializing NVMe Controllers 00:20:44.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:44.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:44.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:44.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:44.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:20:44.577 Initialization complete. Launching workers. 00:20:44.577 ======================================================== 00:20:44.577 Latency(us) 00:20:44.577 Device Information : IOPS MiB/s Average min max 00:20:44.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10542.50 41.18 6071.35 2308.74 10071.98 00:20:44.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10711.30 41.84 5975.04 2282.39 10137.25 00:20:44.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10625.10 41.50 6022.48 2009.41 10379.51 00:20:44.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10608.10 41.44 6033.47 1533.73 10850.89 00:20:44.577 ======================================================== 00:20:44.577 Total : 42486.99 165.96 6025.39 1533.73 10850.89 00:20:44.577 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:44.577 rmmod nvme_tcp 00:20:44.577 rmmod nvme_fabrics 00:20:44.577 rmmod nvme_keyring 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2890946 ']' 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2890946 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2890946 ']' 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2890946 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2890946 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2890946' 00:20:44.577 killing process with pid 2890946 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2890946 00:20:44.577 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2890946 00:20:44.836 13:12:48 
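The run-1 throughput table is consistent with Little's law per connection: spdk_nvme_perf was started with -q 64 on four cores, so each connection sustains roughly 64 / average-latency IOPS. For the lcore-4 row, 64 / 6071.35 µs gives about 10 542, matching the reported 10542.50. A one-line check:

# Little's law on the lcore-4 row above: queue depth / avg latency = IOPS.
awk 'BEGIN { printf "%.0f IOPS\n", 64 / (6071.35 / 1e6) }'   # -> 10542 IOPS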
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:44.836 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:44.836 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:44.836 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:44.836 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:44.836 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:44.836 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:44.836 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:44.836 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:44.836 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.836 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.836 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.740 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:46.740 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:46.740 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:46.740 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:48.117 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:50.019 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:55.292 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:55.292 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:55.292 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.292 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:55.292 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:55.292 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:55.292 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:55.293 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:55.293 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:55.293 Found net devices under 0000:86:00.0: cvl_0_0 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:55.293 Found net devices under 0000:86:00.1: cvl_0_1 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.293 13:12:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:55.293 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:55.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:20:55.294 00:20:55.294 --- 10.0.0.2 ping statistics --- 00:20:55.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.294 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:20:55.294 00:20:55.294 --- 10.0.0.1 ping statistics --- 00:20:55.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.294 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:55.294 net.core.busy_poll = 1 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:55.294 net.core.busy_read = 1 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:55.294 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:55.553 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:55.553 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:55.553 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:55.553 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:55.553 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:55.553 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.553 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:55.553 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2894757 00:20:55.553 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2894757 00:20:55.553 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:55.553 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2894757 ']' 00:20:55.553 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.553 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.553 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.553 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.553 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:55.553 [2024-11-19 13:12:58.793733] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:20:55.553 [2024-11-19 13:12:58.793781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.553 [2024-11-19 13:12:58.873027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:55.553 [2024-11-19 13:12:58.915315] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
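adq_configure_driver, replayed just above, is where ADQ actually gets wired up on the target port: hardware traffic classes are enabled, the driver's packet-inspect optimization is turned off, busy polling is switched on, an mqprio root qdisc carves the queues into two traffic classes, and a hardware-offloaded flower filter pins NVMe/TCP traffic to TC1. Gathered in one place; $NS is my shorthand for the namespace prefix, everything else is verbatim from the trace:

NS="ip netns exec cvl_0_0_ns_spdk"
$NS ethtool --offload cvl_0_0 hw-tc-offload on                  # enable HW TCs
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                                  # busy-poll on epoll waits
sysctl -w net.core.busy_read=1                                  # busy-poll on socket reads
# TC0 (default) -> 2 queues at offset 0; TC1 (ADQ) -> 2 queues at offset 2.
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP (10.0.0.2:4420) into TC1 entirely in hardware (skip_sw).
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
$NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0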
00:20:55.553 [2024-11-19 13:12:58.915353] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.553 [2024-11-19 13:12:58.915359] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.553 [2024-11-19 13:12:58.915365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.553 [2024-11-19 13:12:58.915370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.553 [2024-11-19 13:12:58.916966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.553 [2024-11-19 13:12:58.917040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.553 [2024-11-19 13:12:58.917181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.553 [2024-11-19 13:12:58.917182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.489 13:12:59 
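Between the baseline run and this ADQ run, the target-side RPC sequence differs in exactly two flags, both visible in the trace. My reading (not stated in the log itself) is that placement-id 1 makes the posix sock layer group connections by the socket's incoming NAPI ID, so qpairs arriving on the same hardware queue share a poll group, while sock priority 1 tags the target's sockets into TC1:

# Only deltas between the two adq_configure_nvmf_target invocations:
#   run 1: sock_impl_set_options --enable-placement-id 0 ...   # no grouping
#          nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
#   run 2: sock_impl_set_options --enable-placement-id 1 ...   # group by NAPI ID
#          nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1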
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.489 [2024-11-19 13:12:59.809054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.489 Malloc1 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.489 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.747 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.747 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:56.748 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.748 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.748 [2024-11-19 13:12:59.871515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.748 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.748 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2895008 00:20:56.748 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:56.748 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:58.652 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:58.652 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.652 13:13:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:58.652 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.652 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:58.652 "tick_rate": 2300000000, 00:20:58.652 "poll_groups": [ 00:20:58.652 { 00:20:58.652 "name": "nvmf_tgt_poll_group_000", 00:20:58.652 "admin_qpairs": 1, 00:20:58.652 "io_qpairs": 2, 00:20:58.652 "current_admin_qpairs": 1, 00:20:58.652 "current_io_qpairs": 2, 00:20:58.652 "pending_bdev_io": 0, 00:20:58.652 "completed_nvme_io": 27485, 00:20:58.652 "transports": [ 00:20:58.652 { 00:20:58.652 "trtype": "TCP" 00:20:58.652 } 00:20:58.652 ] 00:20:58.652 }, 00:20:58.652 { 00:20:58.652 "name": "nvmf_tgt_poll_group_001", 00:20:58.652 "admin_qpairs": 0, 00:20:58.652 "io_qpairs": 2, 00:20:58.652 "current_admin_qpairs": 0, 00:20:58.652 "current_io_qpairs": 2, 00:20:58.652 "pending_bdev_io": 0, 00:20:58.652 "completed_nvme_io": 27545, 00:20:58.652 "transports": [ 00:20:58.652 { 00:20:58.652 "trtype": "TCP" 00:20:58.652 } 00:20:58.652 ] 00:20:58.652 }, 00:20:58.652 { 00:20:58.652 "name": "nvmf_tgt_poll_group_002", 00:20:58.652 "admin_qpairs": 0, 00:20:58.652 "io_qpairs": 0, 00:20:58.652 "current_admin_qpairs": 0, 00:20:58.652 "current_io_qpairs": 0, 00:20:58.652 "pending_bdev_io": 0, 00:20:58.652 "completed_nvme_io": 0, 00:20:58.652 "transports": [ 00:20:58.652 { 00:20:58.652 "trtype": "TCP" 00:20:58.652 } 00:20:58.652 ] 00:20:58.652 }, 00:20:58.652 { 00:20:58.652 "name": "nvmf_tgt_poll_group_003", 00:20:58.652 "admin_qpairs": 0, 00:20:58.652 "io_qpairs": 0, 00:20:58.652 "current_admin_qpairs": 0, 00:20:58.652 "current_io_qpairs": 0, 00:20:58.652 "pending_bdev_io": 0, 00:20:58.652 "completed_nvme_io": 0, 00:20:58.652 "transports": [ 00:20:58.652 { 00:20:58.652 "trtype": "TCP" 00:20:58.652 } 00:20:58.652 ] 00:20:58.652 } 00:20:58.652 ] 00:20:58.652 }' 00:20:58.652 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:58.652 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:58.652 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:58.652 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:58.652 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2895008 00:21:06.806 Initializing NVMe Controllers 00:21:06.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:06.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:06.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:06.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:06.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:06.806 Initialization complete. Launching workers. 
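Annotation: the stats dump above is the actual ADQ pass/fail signal. The target polls cores 0-3 (-m 0xF) while spdk_nvme_perf runs on cores 4-7 (-c 0xF0) with four I/O qpairs; because the posix sock layer was started with --enable-placement-id 1, qpairs are grouped by placement ID, so all four collapse onto the two poll groups backing TC1's queues (000 and 001, each showing current_io_qpairs: 2) and groups 002/003 stay idle. The check at perf_adq.sh@108-109 is, paraphrased:

    idle_groups=$(rpc_cmd nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)                        # one line per fully idle poll group
    ((idle_groups >= 2)) || { echo "ADQ did not concentrate the qpairs"; exit 1; }

Here count=2, so [[ 2 -lt 2 ]] is false and the test proceeds to wait for the perf run.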
00:21:06.806 ======================================================== 00:21:06.806 Latency(us) 00:21:06.806 Device Information : IOPS MiB/s Average min max 00:21:06.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7497.10 29.29 8537.51 1450.28 53230.40 00:21:06.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7889.60 30.82 8111.51 1546.95 52082.58 00:21:06.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6596.70 25.77 9701.01 1865.04 52749.25 00:21:06.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7088.60 27.69 9030.61 1533.78 54994.98 00:21:06.806 ======================================================== 00:21:06.806 Total : 29071.99 113.56 8806.14 1450.28 54994.98 00:21:06.806 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:06.806 rmmod nvme_tcp 00:21:06.806 rmmod nvme_fabrics 00:21:06.806 rmmod nvme_keyring 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2894757 ']' 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2894757 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2894757 ']' 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2894757 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2894757 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2894757' 00:21:06.806 killing process with pid 2894757 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2894757 00:21:06.806 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2894757 00:21:07.066 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:07.066 
13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:07.066 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:07.066 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:07.066 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:07.066 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:07.066 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:07.066 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:07.066 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:07.066 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.066 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.066 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:10.355 00:21:10.355 real 0m50.703s 00:21:10.355 user 2m46.433s 00:21:10.355 sys 0m10.455s 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.355 ************************************ 00:21:10.355 END TEST nvmf_perf_adq 00:21:10.355 ************************************ 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:10.355 ************************************ 00:21:10.355 START TEST nvmf_shutdown 00:21:10.355 ************************************ 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:10.355 * Looking for test storage... 
00:21:10.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:10.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.355 --rc genhtml_branch_coverage=1 00:21:10.355 --rc genhtml_function_coverage=1 00:21:10.355 --rc genhtml_legend=1 00:21:10.355 --rc geninfo_all_blocks=1 00:21:10.355 --rc geninfo_unexecuted_blocks=1 00:21:10.355 00:21:10.355 ' 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:10.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.355 --rc genhtml_branch_coverage=1 00:21:10.355 --rc genhtml_function_coverage=1 00:21:10.355 --rc genhtml_legend=1 00:21:10.355 --rc geninfo_all_blocks=1 00:21:10.355 --rc geninfo_unexecuted_blocks=1 00:21:10.355 00:21:10.355 ' 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:10.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.355 --rc genhtml_branch_coverage=1 00:21:10.355 --rc genhtml_function_coverage=1 00:21:10.355 --rc genhtml_legend=1 00:21:10.355 --rc geninfo_all_blocks=1 00:21:10.355 --rc geninfo_unexecuted_blocks=1 00:21:10.355 00:21:10.355 ' 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:10.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.355 --rc genhtml_branch_coverage=1 00:21:10.355 --rc genhtml_function_coverage=1 00:21:10.355 --rc genhtml_legend=1 00:21:10.355 --rc geninfo_all_blocks=1 00:21:10.355 --rc geninfo_unexecuted_blocks=1 00:21:10.355 00:21:10.355 ' 00:21:10.355 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
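Annotation: the cmp_versions churn above is just scripts/common.sh deciding whether the installed lcov (1.15 here) predates 2.x, which changes the spelling of the branch/function coverage flags exported in LCOV_OPTS. The gist of the traced comparison, as a standalone sketch:

    version_lt() {                      # usage: version_lt 1.15 2  ->  true
        local IFS=.-:
        local -a a b
        local i
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly older
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                        # equal is not less-than
    }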
00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:10.356 13:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.356 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:10.616 ************************************ 00:21:10.616 START TEST nvmf_shutdown_tc1 00:21:10.616 ************************************ 00:21:10.616 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:10.616 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:10.616 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:10.616 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:10.616 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.616 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:10.616 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:10.616 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:10.616 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.616 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.616 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.616 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:10.616 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:10.616 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:10.616 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:17.187 13:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.187 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:17.188 13:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:17.188 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:17.188 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:17.188 Found net devices under 0000:86:00.0: cvl_0_0 00:21:17.188 13:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:17.188 Found net devices under 0000:86:00.1: cvl_0_1 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:17.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:17.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:17.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:21:17.188 00:21:17.188 --- 10.0.0.2 ping statistics --- 00:21:17.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.188 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:17.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:17.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:21:17.189 00:21:17.189 --- 10.0.0.1 ping statistics --- 00:21:17.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.189 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2900461 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2900461 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2900461 ']' 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
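Annotation: nvmftestinit above wires the two ports of the E810 card (presumably cabled back-to-back on this phy rig) into a point-to-point pair: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2, every target command including nvmf_tgt is prefixed with ip netns exec, and the initiator port cvl_0_1 keeps 10.0.0.1 in the root namespace so traffic genuinely crosses the NIC. Boiled down to the commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator

Both pings succeed above (0.450 ms and 0.210 ms), confirming the loopback path before the shutdown target is launched.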
00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.189 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.189 [2024-11-19 13:13:19.806390] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:21:17.189 [2024-11-19 13:13:19.806435] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.189 [2024-11-19 13:13:19.867761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:17.189 [2024-11-19 13:13:19.910433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.189 [2024-11-19 13:13:19.910470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.189 [2024-11-19 13:13:19.910477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.189 [2024-11-19 13:13:19.910483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.189 [2024-11-19 13:13:19.910488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:17.189 [2024-11-19 13:13:19.911984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.189 [2024-11-19 13:13:19.912024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.189 [2024-11-19 13:13:19.912131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.189 [2024-11-19 13:13:19.912132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.189 [2024-11-19 13:13:20.047869] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:17.189 13:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.189 Malloc1 
00:21:17.189 [2024-11-19 13:13:20.157643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.189 Malloc2 00:21:17.189 Malloc3 00:21:17.189 Malloc4 00:21:17.189 Malloc5 00:21:17.189 Malloc6 00:21:17.189 Malloc7 00:21:17.189 Malloc8 00:21:17.189 Malloc9 00:21:17.189 Malloc10 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.189 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.449 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2900523 00:21:17.449 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2900523 /var/tmp/bdevperf.sock 00:21:17.449 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2900523 ']' 00:21:17.449 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:17.449 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.449 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:17.449 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.450 { 00:21:17.450 "params": { 00:21:17.450 "name": "Nvme$subsystem", 00:21:17.450 "trtype": "$TEST_TRANSPORT", 00:21:17.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.450 "adrfam": "ipv4", 00:21:17.450 "trsvcid": "$NVMF_PORT", 00:21:17.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.450 "hdgst": ${hdgst:-false}, 00:21:17.450 "ddgst": ${ddgst:-false} 00:21:17.450 }, 00:21:17.450 "method": "bdev_nvme_attach_controller" 00:21:17.450 } 00:21:17.450 EOF 00:21:17.450 )") 00:21:17.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.450 { 00:21:17.450 "params": { 00:21:17.450 "name": "Nvme$subsystem", 00:21:17.450 "trtype": "$TEST_TRANSPORT", 00:21:17.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.450 "adrfam": "ipv4", 00:21:17.450 "trsvcid": "$NVMF_PORT", 00:21:17.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.450 "hdgst": ${hdgst:-false}, 00:21:17.450 "ddgst": ${ddgst:-false} 00:21:17.450 }, 00:21:17.450 "method": "bdev_nvme_attach_controller" 00:21:17.450 } 00:21:17.450 EOF 00:21:17.450 )") 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.450 { 00:21:17.450 "params": { 00:21:17.450 "name": "Nvme$subsystem", 00:21:17.450 "trtype": "$TEST_TRANSPORT", 00:21:17.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.450 "adrfam": "ipv4", 00:21:17.450 "trsvcid": "$NVMF_PORT", 00:21:17.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.450 "hdgst": ${hdgst:-false}, 00:21:17.450 "ddgst": ${ddgst:-false} 00:21:17.450 }, 00:21:17.450 "method": "bdev_nvme_attach_controller" 00:21:17.450 } 00:21:17.450 EOF 00:21:17.450 )") 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.450 { 00:21:17.450 "params": { 00:21:17.450 "name": "Nvme$subsystem", 00:21:17.450 "trtype": "$TEST_TRANSPORT", 00:21:17.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.450 "adrfam": "ipv4", 00:21:17.450 "trsvcid": "$NVMF_PORT", 00:21:17.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.450 "hdgst": ${hdgst:-false}, 00:21:17.450 "ddgst": ${ddgst:-false} 00:21:17.450 }, 00:21:17.450 "method": "bdev_nvme_attach_controller" 00:21:17.450 } 00:21:17.450 EOF 00:21:17.450 )") 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.450 { 00:21:17.450 "params": { 00:21:17.450 "name": "Nvme$subsystem", 00:21:17.450 "trtype": 
"$TEST_TRANSPORT", 00:21:17.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.450 "adrfam": "ipv4", 00:21:17.450 "trsvcid": "$NVMF_PORT", 00:21:17.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.450 "hdgst": ${hdgst:-false}, 00:21:17.450 "ddgst": ${ddgst:-false} 00:21:17.450 }, 00:21:17.450 "method": "bdev_nvme_attach_controller" 00:21:17.450 } 00:21:17.450 EOF 00:21:17.450 )") 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.450 { 00:21:17.450 "params": { 00:21:17.450 "name": "Nvme$subsystem", 00:21:17.450 "trtype": "$TEST_TRANSPORT", 00:21:17.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.450 "adrfam": "ipv4", 00:21:17.450 "trsvcid": "$NVMF_PORT", 00:21:17.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.450 "hdgst": ${hdgst:-false}, 00:21:17.450 "ddgst": ${ddgst:-false} 00:21:17.450 }, 00:21:17.450 "method": "bdev_nvme_attach_controller" 00:21:17.450 } 00:21:17.450 EOF 00:21:17.450 )") 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.450 { 00:21:17.450 "params": { 00:21:17.450 "name": "Nvme$subsystem", 00:21:17.450 "trtype": "$TEST_TRANSPORT", 00:21:17.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.450 "adrfam": "ipv4", 00:21:17.450 "trsvcid": "$NVMF_PORT", 00:21:17.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.450 "hdgst": ${hdgst:-false}, 00:21:17.450 "ddgst": ${ddgst:-false} 00:21:17.450 }, 00:21:17.450 "method": "bdev_nvme_attach_controller" 00:21:17.450 } 00:21:17.450 EOF 00:21:17.450 )") 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:17.450 [2024-11-19 13:13:20.628145] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:21:17.450 [2024-11-19 13:13:20.628193] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.450 { 00:21:17.450 "params": { 00:21:17.450 "name": "Nvme$subsystem", 00:21:17.450 "trtype": "$TEST_TRANSPORT", 00:21:17.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.450 "adrfam": "ipv4", 00:21:17.450 "trsvcid": "$NVMF_PORT", 00:21:17.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.450 "hdgst": ${hdgst:-false}, 00:21:17.450 "ddgst": ${ddgst:-false} 00:21:17.450 }, 00:21:17.450 "method": "bdev_nvme_attach_controller" 00:21:17.450 } 00:21:17.450 EOF 00:21:17.450 )") 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.450 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.450 { 00:21:17.450 "params": { 00:21:17.450 "name": "Nvme$subsystem", 00:21:17.450 "trtype": "$TEST_TRANSPORT", 00:21:17.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.450 "adrfam": "ipv4", 00:21:17.450 "trsvcid": "$NVMF_PORT", 00:21:17.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.450 "hdgst": ${hdgst:-false}, 00:21:17.450 "ddgst": ${ddgst:-false} 00:21:17.450 }, 00:21:17.450 "method": "bdev_nvme_attach_controller" 00:21:17.450 } 00:21:17.450 EOF 00:21:17.451 )") 00:21:17.451 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:17.451 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.451 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.451 { 00:21:17.451 "params": { 00:21:17.451 "name": "Nvme$subsystem", 00:21:17.451 "trtype": "$TEST_TRANSPORT", 00:21:17.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.451 "adrfam": "ipv4", 00:21:17.451 "trsvcid": "$NVMF_PORT", 00:21:17.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.451 "hdgst": ${hdgst:-false}, 00:21:17.451 "ddgst": ${ddgst:-false} 00:21:17.451 }, 00:21:17.451 "method": "bdev_nvme_attach_controller" 00:21:17.451 } 00:21:17.451 EOF 00:21:17.451 )") 00:21:17.451 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:17.451 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
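
The ten identical heredoc expansions traced above are gen_nvmf_target_json building one bdev_nvme_attach_controller fragment per subsystem, and the IFS=, / printf pair expanded next is the join: with IFS set to a comma, "${config[*]}" collapses the array into exactly the '{...},{...}' argument shown in the printf record. A minimal sketch of the pattern, assuming TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, and NVMF_PORT are exported by the test environment; the jq . record suggests the joined list is additionally wrapped and validated, but that wrapper object is not visible in the trace, so it is elided here:

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # $(cat <<EOF ...) expands the shell variables inside the
            # heredoc before the fragment is stored in the array.
            config+=("$(
                cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
            )")
        done
        # "${config[*]}" joins the array with the first character of IFS,
        # producing the comma-separated object list seen in the log.
        local IFS=,
        printf '%s\n' "${config[*]}"
    }
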
00:21:17.451 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:17.451 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:17.451 "params": { 00:21:17.451 "name": "Nvme1", 00:21:17.451 "trtype": "tcp", 00:21:17.451 "traddr": "10.0.0.2", 00:21:17.451 "adrfam": "ipv4", 00:21:17.451 "trsvcid": "4420", 00:21:17.451 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.451 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.451 "hdgst": false, 00:21:17.451 "ddgst": false 00:21:17.451 }, 00:21:17.451 "method": "bdev_nvme_attach_controller" 00:21:17.451 },{ 00:21:17.451 "params": { 00:21:17.451 "name": "Nvme2", 00:21:17.451 "trtype": "tcp", 00:21:17.451 "traddr": "10.0.0.2", 00:21:17.451 "adrfam": "ipv4", 00:21:17.451 "trsvcid": "4420", 00:21:17.451 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:17.451 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:17.451 "hdgst": false, 00:21:17.451 "ddgst": false 00:21:17.451 }, 00:21:17.451 "method": "bdev_nvme_attach_controller" 00:21:17.451 },{ 00:21:17.451 "params": { 00:21:17.451 "name": "Nvme3", 00:21:17.451 "trtype": "tcp", 00:21:17.451 "traddr": "10.0.0.2", 00:21:17.451 "adrfam": "ipv4", 00:21:17.451 "trsvcid": "4420", 00:21:17.451 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:17.451 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:17.451 "hdgst": false, 00:21:17.451 "ddgst": false 00:21:17.451 }, 00:21:17.451 "method": "bdev_nvme_attach_controller" 00:21:17.451 },{ 00:21:17.451 "params": { 00:21:17.451 "name": "Nvme4", 00:21:17.451 "trtype": "tcp", 00:21:17.451 "traddr": "10.0.0.2", 00:21:17.451 "adrfam": "ipv4", 00:21:17.451 "trsvcid": "4420", 00:21:17.451 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:17.451 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:17.451 "hdgst": false, 00:21:17.451 "ddgst": false 00:21:17.451 }, 00:21:17.451 "method": "bdev_nvme_attach_controller" 00:21:17.451 },{ 00:21:17.451 "params": { 00:21:17.451 "name": "Nvme5", 00:21:17.451 "trtype": "tcp", 00:21:17.451 "traddr": "10.0.0.2", 00:21:17.451 "adrfam": "ipv4", 00:21:17.451 "trsvcid": "4420", 00:21:17.451 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:17.451 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:17.451 "hdgst": false, 00:21:17.451 "ddgst": false 00:21:17.451 }, 00:21:17.451 "method": "bdev_nvme_attach_controller" 00:21:17.451 },{ 00:21:17.451 "params": { 00:21:17.451 "name": "Nvme6", 00:21:17.451 "trtype": "tcp", 00:21:17.451 "traddr": "10.0.0.2", 00:21:17.451 "adrfam": "ipv4", 00:21:17.451 "trsvcid": "4420", 00:21:17.451 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:17.451 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:17.451 "hdgst": false, 00:21:17.451 "ddgst": false 00:21:17.451 }, 00:21:17.451 "method": "bdev_nvme_attach_controller" 00:21:17.451 },{ 00:21:17.451 "params": { 00:21:17.451 "name": "Nvme7", 00:21:17.451 "trtype": "tcp", 00:21:17.451 "traddr": "10.0.0.2", 00:21:17.451 "adrfam": "ipv4", 00:21:17.451 "trsvcid": "4420", 00:21:17.451 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:17.451 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:17.451 "hdgst": false, 00:21:17.451 "ddgst": false 00:21:17.451 }, 00:21:17.451 "method": "bdev_nvme_attach_controller" 00:21:17.451 },{ 00:21:17.451 "params": { 00:21:17.451 "name": "Nvme8", 00:21:17.451 "trtype": "tcp", 00:21:17.451 "traddr": "10.0.0.2", 00:21:17.451 "adrfam": "ipv4", 00:21:17.451 "trsvcid": "4420", 00:21:17.451 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:17.451 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:17.451 "hdgst": false, 00:21:17.451 "ddgst": false 00:21:17.451 }, 00:21:17.451 "method": "bdev_nvme_attach_controller" 00:21:17.451 },{ 00:21:17.451 "params": { 00:21:17.451 "name": "Nvme9", 00:21:17.451 "trtype": "tcp", 00:21:17.451 "traddr": "10.0.0.2", 00:21:17.451 "adrfam": "ipv4", 00:21:17.451 "trsvcid": "4420", 00:21:17.451 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:17.451 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:17.451 "hdgst": false, 00:21:17.451 "ddgst": false 00:21:17.451 }, 00:21:17.451 "method": "bdev_nvme_attach_controller" 00:21:17.451 },{ 00:21:17.451 "params": { 00:21:17.451 "name": "Nvme10", 00:21:17.451 "trtype": "tcp", 00:21:17.451 "traddr": "10.0.0.2", 00:21:17.451 "adrfam": "ipv4", 00:21:17.451 "trsvcid": "4420", 00:21:17.451 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:17.451 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:17.451 "hdgst": false, 00:21:17.451 "ddgst": false 00:21:17.451 }, 00:21:17.451 "method": "bdev_nvme_attach_controller" 00:21:17.451 }' 00:21:17.451 [2024-11-19 13:13:20.703897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.451 [2024-11-19 13:13:20.745348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.357 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.357 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:19.357 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:19.357 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.357 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:19.357 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.357 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2900523 00:21:19.357 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:19.357 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:20.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2900523 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:20.293 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2900461 00:21:20.293 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:20.293 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:20.293 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.294 { 00:21:20.294 "params": { 00:21:20.294 "name": "Nvme$subsystem", 00:21:20.294 "trtype": "$TEST_TRANSPORT", 00:21:20.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.294 "adrfam": "ipv4", 00:21:20.294 "trsvcid": "$NVMF_PORT", 00:21:20.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.294 "hdgst": ${hdgst:-false}, 00:21:20.294 "ddgst": ${ddgst:-false} 00:21:20.294 }, 00:21:20.294 "method": "bdev_nvme_attach_controller" 00:21:20.294 } 00:21:20.294 EOF 00:21:20.294 )") 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.294 { 00:21:20.294 "params": { 00:21:20.294 "name": "Nvme$subsystem", 00:21:20.294 "trtype": "$TEST_TRANSPORT", 00:21:20.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.294 "adrfam": "ipv4", 00:21:20.294 "trsvcid": "$NVMF_PORT", 00:21:20.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.294 "hdgst": ${hdgst:-false}, 00:21:20.294 "ddgst": ${ddgst:-false} 00:21:20.294 }, 00:21:20.294 "method": "bdev_nvme_attach_controller" 00:21:20.294 } 00:21:20.294 EOF 00:21:20.294 )") 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.294 { 00:21:20.294 "params": { 00:21:20.294 "name": "Nvme$subsystem", 00:21:20.294 "trtype": "$TEST_TRANSPORT", 00:21:20.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.294 "adrfam": "ipv4", 00:21:20.294 "trsvcid": "$NVMF_PORT", 00:21:20.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.294 "hdgst": ${hdgst:-false}, 00:21:20.294 "ddgst": ${ddgst:-false} 00:21:20.294 }, 00:21:20.294 "method": "bdev_nvme_attach_controller" 00:21:20.294 } 00:21:20.294 EOF 00:21:20.294 )") 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.294 { 00:21:20.294 "params": { 00:21:20.294 "name": "Nvme$subsystem", 00:21:20.294 "trtype": "$TEST_TRANSPORT", 00:21:20.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.294 "adrfam": "ipv4", 00:21:20.294 "trsvcid": "$NVMF_PORT", 00:21:20.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.294 "hdgst": ${hdgst:-false}, 00:21:20.294 "ddgst": ${ddgst:-false} 00:21:20.294 }, 00:21:20.294 "method": "bdev_nvme_attach_controller" 00:21:20.294 } 00:21:20.294 EOF 00:21:20.294 )") 00:21:20.294 13:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.294 { 00:21:20.294 "params": { 00:21:20.294 "name": "Nvme$subsystem", 00:21:20.294 "trtype": "$TEST_TRANSPORT", 00:21:20.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.294 "adrfam": "ipv4", 00:21:20.294 "trsvcid": "$NVMF_PORT", 00:21:20.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.294 "hdgst": ${hdgst:-false}, 00:21:20.294 "ddgst": ${ddgst:-false} 00:21:20.294 }, 00:21:20.294 "method": "bdev_nvme_attach_controller" 00:21:20.294 } 00:21:20.294 EOF 00:21:20.294 )") 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.294 { 00:21:20.294 "params": { 00:21:20.294 "name": "Nvme$subsystem", 00:21:20.294 "trtype": "$TEST_TRANSPORT", 00:21:20.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.294 "adrfam": "ipv4", 00:21:20.294 "trsvcid": "$NVMF_PORT", 00:21:20.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.294 "hdgst": ${hdgst:-false}, 00:21:20.294 "ddgst": ${ddgst:-false} 00:21:20.294 }, 00:21:20.294 "method": "bdev_nvme_attach_controller" 00:21:20.294 } 00:21:20.294 EOF 00:21:20.294 )") 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.294 { 00:21:20.294 "params": { 00:21:20.294 "name": "Nvme$subsystem", 00:21:20.294 "trtype": "$TEST_TRANSPORT", 00:21:20.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.294 "adrfam": "ipv4", 00:21:20.294 "trsvcid": "$NVMF_PORT", 00:21:20.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.294 "hdgst": ${hdgst:-false}, 00:21:20.294 "ddgst": ${ddgst:-false} 00:21:20.294 }, 00:21:20.294 "method": "bdev_nvme_attach_controller" 00:21:20.294 } 00:21:20.294 EOF 00:21:20.294 )") 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.294 [2024-11-19 13:13:23.560327] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
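
The short sequence traced just before this second config pass (shutdown.sh@84 through @88, plus the @89 check) is tc1's actual assertion: SIGKILL the app holding the bdevperf socket, then prove the nvmf target itself survived. In sketch form, using the perfpid and nvmfpid variable names the trace records for pids 2900523 and 2900461:

    kill -9 "$perfpid"            # pid 2900523 in this run
    rm -f /var/run/spdk_bdev1     # socket file left behind by the killed app
    sleep 1
    kill -0 "$nvmfpid"            # pid 2900461; kill -0 only probes the pid,
                                  # a non-zero exit here would fail the test
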
00:21:20.294 [2024-11-19 13:13:23.560375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901062 ] 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.294 { 00:21:20.294 "params": { 00:21:20.294 "name": "Nvme$subsystem", 00:21:20.294 "trtype": "$TEST_TRANSPORT", 00:21:20.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.294 "adrfam": "ipv4", 00:21:20.294 "trsvcid": "$NVMF_PORT", 00:21:20.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.294 "hdgst": ${hdgst:-false}, 00:21:20.294 "ddgst": ${ddgst:-false} 00:21:20.294 }, 00:21:20.294 "method": "bdev_nvme_attach_controller" 00:21:20.294 } 00:21:20.294 EOF 00:21:20.294 )") 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.294 { 00:21:20.294 "params": { 00:21:20.294 "name": "Nvme$subsystem", 00:21:20.294 "trtype": "$TEST_TRANSPORT", 00:21:20.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.294 "adrfam": "ipv4", 00:21:20.294 "trsvcid": "$NVMF_PORT", 00:21:20.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.294 "hdgst": ${hdgst:-false}, 00:21:20.294 "ddgst": ${ddgst:-false} 00:21:20.294 }, 00:21:20.294 "method": "bdev_nvme_attach_controller" 00:21:20.294 } 00:21:20.294 EOF 00:21:20.294 )") 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.294 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.294 { 00:21:20.294 "params": { 00:21:20.294 "name": "Nvme$subsystem", 00:21:20.295 "trtype": "$TEST_TRANSPORT", 00:21:20.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.295 "adrfam": "ipv4", 00:21:20.295 "trsvcid": "$NVMF_PORT", 00:21:20.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.295 "hdgst": ${hdgst:-false}, 00:21:20.295 "ddgst": ${ddgst:-false} 00:21:20.295 }, 00:21:20.295 "method": "bdev_nvme_attach_controller" 00:21:20.295 } 00:21:20.295 EOF 00:21:20.295 )") 00:21:20.295 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:20.295 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
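
shutdown.sh@92, expanded above, then runs the real I/O pass. bdevperf reads the freshly generated config through process substitution, which the shell exposes as the /dev/fd/62 path visible in the trace; the flags are taken verbatim from the expanded command, and $rootdir is the checkout path shown in the earlier "Killed" message:

    "$rootdir"/build/examples/bdevperf \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 1
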
00:21:20.295 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:20.295 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:20.295 "params": { 00:21:20.295 "name": "Nvme1", 00:21:20.295 "trtype": "tcp", 00:21:20.295 "traddr": "10.0.0.2", 00:21:20.295 "adrfam": "ipv4", 00:21:20.295 "trsvcid": "4420", 00:21:20.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.295 "hdgst": false, 00:21:20.295 "ddgst": false 00:21:20.295 }, 00:21:20.295 "method": "bdev_nvme_attach_controller" 00:21:20.295 },{ 00:21:20.295 "params": { 00:21:20.295 "name": "Nvme2", 00:21:20.295 "trtype": "tcp", 00:21:20.295 "traddr": "10.0.0.2", 00:21:20.295 "adrfam": "ipv4", 00:21:20.295 "trsvcid": "4420", 00:21:20.295 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:20.295 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:20.295 "hdgst": false, 00:21:20.295 "ddgst": false 00:21:20.295 }, 00:21:20.295 "method": "bdev_nvme_attach_controller" 00:21:20.295 },{ 00:21:20.295 "params": { 00:21:20.295 "name": "Nvme3", 00:21:20.295 "trtype": "tcp", 00:21:20.295 "traddr": "10.0.0.2", 00:21:20.295 "adrfam": "ipv4", 00:21:20.295 "trsvcid": "4420", 00:21:20.295 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:20.295 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:20.295 "hdgst": false, 00:21:20.295 "ddgst": false 00:21:20.295 }, 00:21:20.295 "method": "bdev_nvme_attach_controller" 00:21:20.295 },{ 00:21:20.295 "params": { 00:21:20.295 "name": "Nvme4", 00:21:20.295 "trtype": "tcp", 00:21:20.295 "traddr": "10.0.0.2", 00:21:20.295 "adrfam": "ipv4", 00:21:20.295 "trsvcid": "4420", 00:21:20.295 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:20.295 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:20.295 "hdgst": false, 00:21:20.295 "ddgst": false 00:21:20.295 }, 00:21:20.295 "method": "bdev_nvme_attach_controller" 00:21:20.295 },{ 00:21:20.295 "params": { 00:21:20.295 "name": "Nvme5", 00:21:20.295 "trtype": "tcp", 00:21:20.295 "traddr": "10.0.0.2", 00:21:20.295 "adrfam": "ipv4", 00:21:20.295 "trsvcid": "4420", 00:21:20.295 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:20.295 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:20.295 "hdgst": false, 00:21:20.295 "ddgst": false 00:21:20.295 }, 00:21:20.295 "method": "bdev_nvme_attach_controller" 00:21:20.295 },{ 00:21:20.295 "params": { 00:21:20.295 "name": "Nvme6", 00:21:20.295 "trtype": "tcp", 00:21:20.295 "traddr": "10.0.0.2", 00:21:20.295 "adrfam": "ipv4", 00:21:20.295 "trsvcid": "4420", 00:21:20.295 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:20.295 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:20.295 "hdgst": false, 00:21:20.295 "ddgst": false 00:21:20.295 }, 00:21:20.295 "method": "bdev_nvme_attach_controller" 00:21:20.295 },{ 00:21:20.295 "params": { 00:21:20.295 "name": "Nvme7", 00:21:20.295 "trtype": "tcp", 00:21:20.295 "traddr": "10.0.0.2", 00:21:20.295 "adrfam": "ipv4", 00:21:20.295 "trsvcid": "4420", 00:21:20.295 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:20.295 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:20.295 "hdgst": false, 00:21:20.295 "ddgst": false 00:21:20.295 }, 00:21:20.295 "method": "bdev_nvme_attach_controller" 00:21:20.295 },{ 00:21:20.295 "params": { 00:21:20.295 "name": "Nvme8", 00:21:20.295 "trtype": "tcp", 00:21:20.295 "traddr": "10.0.0.2", 00:21:20.295 "adrfam": "ipv4", 00:21:20.295 "trsvcid": "4420", 00:21:20.295 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:20.295 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:20.295 "hdgst": false, 00:21:20.295 "ddgst": false 00:21:20.295 }, 00:21:20.295 "method": "bdev_nvme_attach_controller" 00:21:20.295 },{ 00:21:20.295 "params": { 00:21:20.295 "name": "Nvme9", 00:21:20.295 "trtype": "tcp", 00:21:20.295 "traddr": "10.0.0.2", 00:21:20.295 "adrfam": "ipv4", 00:21:20.295 "trsvcid": "4420", 00:21:20.295 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:20.295 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:20.295 "hdgst": false, 00:21:20.295 "ddgst": false 00:21:20.295 }, 00:21:20.295 "method": "bdev_nvme_attach_controller" 00:21:20.295 },{ 00:21:20.295 "params": { 00:21:20.295 "name": "Nvme10", 00:21:20.295 "trtype": "tcp", 00:21:20.295 "traddr": "10.0.0.2", 00:21:20.295 "adrfam": "ipv4", 00:21:20.295 "trsvcid": "4420", 00:21:20.295 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:20.295 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:20.295 "hdgst": false, 00:21:20.295 "ddgst": false 00:21:20.295 }, 00:21:20.295 "method": "bdev_nvme_attach_controller" 00:21:20.295 }' 00:21:20.295 [2024-11-19 13:13:23.635080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.554 [2024-11-19 13:13:23.676831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.931 Running I/O for 1 seconds... 00:21:23.127 2194.00 IOPS, 137.12 MiB/s 00:21:23.127 Latency(us) 00:21:23.127 [2024-11-19T12:13:26.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.127 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.127 Verification LBA range: start 0x0 length 0x400 00:21:23.127 Nvme1n1 : 1.17 274.45 17.15 0.00 0.00 231048.24 16754.42 220656.86 00:21:23.127 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.127 Verification LBA range: start 0x0 length 0x400 00:21:23.127 Nvme2n1 : 1.03 248.34 15.52 0.00 0.00 251058.75 18692.01 218833.25 00:21:23.127 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.127 Verification LBA range: start 0x0 length 0x400 00:21:23.127 Nvme3n1 : 1.12 289.11 18.07 0.00 0.00 212078.44 8377.21 218833.25 00:21:23.127 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.127 Verification LBA range: start 0x0 length 0x400 00:21:23.127 Nvme4n1 : 1.15 285.18 17.82 0.00 0.00 212163.64 4530.53 222480.47 00:21:23.127 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.127 Verification LBA range: start 0x0 length 0x400 00:21:23.127 Nvme5n1 : 1.18 271.97 17.00 0.00 0.00 220038.54 16640.45 217009.64 00:21:23.127 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.127 Verification LBA range: start 0x0 length 0x400 00:21:23.127 Nvme6n1 : 1.18 272.16 17.01 0.00 0.00 217112.58 18578.03 222480.47 00:21:23.127 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.127 Verification LBA range: start 0x0 length 0x400 00:21:23.127 Nvme7n1 : 1.16 275.21 17.20 0.00 0.00 211274.00 16298.52 219745.06 00:21:23.127 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.127 Verification LBA range: start 0x0 length 0x400 00:21:23.127 Nvme8n1 : 1.17 273.41 17.09 0.00 0.00 209645.39 17096.35 217009.64 00:21:23.127 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.127 Verification LBA range: start 0x0 length 0x400 00:21:23.127 Nvme9n1 : 1.18 270.43 16.90 0.00 0.00 208699.48 4587.52 235245.75 00:21:23.127 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:21:23.127 Verification LBA range: start 0x0 length 0x400 00:21:23.127 Nvme10n1 : 1.18 274.96 17.18 0.00 0.00 202014.63 16070.57 237069.36 00:21:23.127 [2024-11-19T12:13:26.504Z] =================================================================================================================== 00:21:23.127 [2024-11-19T12:13:26.504Z] Total : 2735.21 170.95 0.00 0.00 216789.14 4530.53 237069.36 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:23.387 rmmod nvme_tcp 00:21:23.387 rmmod nvme_fabrics 00:21:23.387 rmmod nvme_keyring 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2900461 ']' 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2900461 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2900461 ']' 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2900461 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2900461 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:23.387 13:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2900461' 00:21:23.387 killing process with pid 2900461 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2900461 00:21:23.387 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2900461 00:21:23.955 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:23.955 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:23.955 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:23.955 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:23.955 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:23.955 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:23.955 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:23.955 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:23.955 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:23.955 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.955 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.955 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:25.862 00:21:25.862 real 0m15.360s 00:21:25.862 user 0m34.470s 00:21:25.862 sys 0m5.823s 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.862 ************************************ 00:21:25.862 END TEST nvmf_shutdown_tc1 00:21:25.862 ************************************ 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:25.862 ************************************ 00:21:25.862 START TEST nvmf_shutdown_tc2 00:21:25.862 ************************************ 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:25.862 13:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:25.862 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:25.863 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.863 13:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:25.863 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:25.863 Found net devices under 0000:86:00.0: cvl_0_0 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.863 13:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:25.863 Found net devices under 0000:86:00.1: cvl_0_1 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:25.863 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:26.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:26.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms
00:21:26.123
00:21:26.123 --- 10.0.0.2 ping statistics ---
00:21:26.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:26.123 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms
00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:26.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:26.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms
00:21:26.123
00:21:26.123 --- 10.0.0.1 ping statistics ---
00:21:26.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:26.123 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms
00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:26.123 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:26.382 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:26.382 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:26.382 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 --
common/autotest_common.sh@726 -- # xtrace_disable 00:21:26.382 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.382 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2902249 00:21:26.382 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2902249 00:21:26.382 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:26.382 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2902249 ']' 00:21:26.382 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.382 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.382 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.382 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.382 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.382 [2024-11-19 13:13:29.558345] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:21:26.382 [2024-11-19 13:13:29.558387] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.382 [2024-11-19 13:13:29.624124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:26.382 [2024-11-19 13:13:29.667099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.382 [2024-11-19 13:13:29.667136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.382 [2024-11-19 13:13:29.667143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.382 [2024-11-19 13:13:29.667150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.382 [2024-11-19 13:13:29.667155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
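
For the phy flavour of this test the target runs inside the cvl_0_0_ns_spdk network namespace while the initiator-side port stays in the default namespace, so NVMe/TCP traffic crosses the real link between the two E810 ports. Condensed from the nvmf_tcp_init and nvmfappstart records above (ordering and the $rootdir shorthand are assumptions; the trace shows the fully expanded paths):

    # One port (cvl_0_0) moves into the namespace and serves as the target
    # side; its peer (cvl_0_1) stays outside as the initiator side.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2   # initiator -> target across the physical link
    # The target app itself is then launched inside the namespace:
    ip netns exec cvl_0_0_ns_spdk "$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
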
00:21:26.382 [2024-11-19 13:13:29.668755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.382 [2024-11-19 13:13:29.668872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.382 [2024-11-19 13:13:29.668986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.382 [2024-11-19 13:13:29.668986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.642 [2024-11-19 13:13:29.809204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.642 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.642 Malloc1 00:21:26.642 [2024-11-19 13:13:29.911013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.642 Malloc2 00:21:26.642 Malloc3 00:21:26.642 Malloc4 00:21:26.901 Malloc5 00:21:26.901 Malloc6 00:21:26.901 Malloc7 00:21:26.901 Malloc8 00:21:26.901 Malloc9 00:21:26.901 Malloc10 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2902303 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2902303 /var/tmp/bdevperf.sock 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2902303 ']' 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.161 13:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:27.161 { 00:21:27.161 "params": { 00:21:27.161 "name": "Nvme$subsystem", 00:21:27.161 "trtype": "$TEST_TRANSPORT", 00:21:27.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.161 "adrfam": "ipv4", 00:21:27.161 "trsvcid": "$NVMF_PORT", 00:21:27.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.161 "hdgst": ${hdgst:-false}, 00:21:27.161 "ddgst": ${ddgst:-false} 00:21:27.161 }, 00:21:27.161 "method": "bdev_nvme_attach_controller" 00:21:27.161 } 00:21:27.161 EOF 00:21:27.161 )") 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:27.161 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:27.161 { 00:21:27.161 "params": { 00:21:27.161 "name": "Nvme$subsystem", 00:21:27.161 "trtype": "$TEST_TRANSPORT", 00:21:27.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.161 "adrfam": "ipv4", 00:21:27.161 "trsvcid": "$NVMF_PORT", 00:21:27.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.161 "hdgst": ${hdgst:-false}, 00:21:27.161 "ddgst": ${ddgst:-false} 00:21:27.161 }, 00:21:27.162 "method": "bdev_nvme_attach_controller" 00:21:27.162 } 00:21:27.162 EOF 00:21:27.162 )") 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:27.162 { 00:21:27.162 "params": { 00:21:27.162 
"name": "Nvme$subsystem", 00:21:27.162 "trtype": "$TEST_TRANSPORT", 00:21:27.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.162 "adrfam": "ipv4", 00:21:27.162 "trsvcid": "$NVMF_PORT", 00:21:27.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.162 "hdgst": ${hdgst:-false}, 00:21:27.162 "ddgst": ${ddgst:-false} 00:21:27.162 }, 00:21:27.162 "method": "bdev_nvme_attach_controller" 00:21:27.162 } 00:21:27.162 EOF 00:21:27.162 )") 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:27.162 { 00:21:27.162 "params": { 00:21:27.162 "name": "Nvme$subsystem", 00:21:27.162 "trtype": "$TEST_TRANSPORT", 00:21:27.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.162 "adrfam": "ipv4", 00:21:27.162 "trsvcid": "$NVMF_PORT", 00:21:27.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.162 "hdgst": ${hdgst:-false}, 00:21:27.162 "ddgst": ${ddgst:-false} 00:21:27.162 }, 00:21:27.162 "method": "bdev_nvme_attach_controller" 00:21:27.162 } 00:21:27.162 EOF 00:21:27.162 )") 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:27.162 { 00:21:27.162 "params": { 00:21:27.162 "name": "Nvme$subsystem", 00:21:27.162 "trtype": "$TEST_TRANSPORT", 00:21:27.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.162 "adrfam": "ipv4", 00:21:27.162 "trsvcid": "$NVMF_PORT", 00:21:27.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.162 "hdgst": ${hdgst:-false}, 00:21:27.162 "ddgst": ${ddgst:-false} 00:21:27.162 }, 00:21:27.162 "method": "bdev_nvme_attach_controller" 00:21:27.162 } 00:21:27.162 EOF 00:21:27.162 )") 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:27.162 { 00:21:27.162 "params": { 00:21:27.162 "name": "Nvme$subsystem", 00:21:27.162 "trtype": "$TEST_TRANSPORT", 00:21:27.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.162 "adrfam": "ipv4", 00:21:27.162 "trsvcid": "$NVMF_PORT", 00:21:27.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.162 "hdgst": ${hdgst:-false}, 00:21:27.162 "ddgst": ${ddgst:-false} 00:21:27.162 }, 00:21:27.162 "method": "bdev_nvme_attach_controller" 00:21:27.162 } 00:21:27.162 EOF 00:21:27.162 )") 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:27.162 { 00:21:27.162 "params": { 00:21:27.162 "name": "Nvme$subsystem", 00:21:27.162 "trtype": "$TEST_TRANSPORT", 00:21:27.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.162 "adrfam": "ipv4", 00:21:27.162 "trsvcid": "$NVMF_PORT", 00:21:27.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.162 "hdgst": ${hdgst:-false}, 00:21:27.162 "ddgst": ${ddgst:-false} 00:21:27.162 }, 00:21:27.162 "method": "bdev_nvme_attach_controller" 00:21:27.162 } 00:21:27.162 EOF 00:21:27.162 )") 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:27.162 [2024-11-19 13:13:30.384326] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:21:27.162 [2024-11-19 13:13:30.384372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902303 ] 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:27.162 { 00:21:27.162 "params": { 00:21:27.162 "name": "Nvme$subsystem", 00:21:27.162 "trtype": "$TEST_TRANSPORT", 00:21:27.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.162 "adrfam": "ipv4", 00:21:27.162 "trsvcid": "$NVMF_PORT", 00:21:27.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.162 "hdgst": ${hdgst:-false}, 00:21:27.162 "ddgst": ${ddgst:-false} 00:21:27.162 }, 00:21:27.162 "method": "bdev_nvme_attach_controller" 00:21:27.162 } 00:21:27.162 EOF 00:21:27.162 )") 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:27.162 { 00:21:27.162 "params": { 00:21:27.162 "name": "Nvme$subsystem", 00:21:27.162 "trtype": "$TEST_TRANSPORT", 00:21:27.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.162 "adrfam": "ipv4", 00:21:27.162 "trsvcid": "$NVMF_PORT", 00:21:27.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.162 "hdgst": ${hdgst:-false}, 00:21:27.162 "ddgst": ${ddgst:-false} 00:21:27.162 }, 00:21:27.162 "method": "bdev_nvme_attach_controller" 00:21:27.162 } 00:21:27.162 EOF 00:21:27.162 )") 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:27.162 { 00:21:27.162 "params": { 00:21:27.162 "name": "Nvme$subsystem", 00:21:27.162 "trtype": "$TEST_TRANSPORT", 00:21:27.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.162 
"adrfam": "ipv4", 00:21:27.162 "trsvcid": "$NVMF_PORT", 00:21:27.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.162 "hdgst": ${hdgst:-false}, 00:21:27.162 "ddgst": ${ddgst:-false} 00:21:27.162 }, 00:21:27.162 "method": "bdev_nvme_attach_controller" 00:21:27.162 } 00:21:27.162 EOF 00:21:27.162 )") 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:27.162 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:27.162 "params": { 00:21:27.162 "name": "Nvme1", 00:21:27.162 "trtype": "tcp", 00:21:27.162 "traddr": "10.0.0.2", 00:21:27.162 "adrfam": "ipv4", 00:21:27.162 "trsvcid": "4420", 00:21:27.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:27.162 "hdgst": false, 00:21:27.162 "ddgst": false 00:21:27.162 }, 00:21:27.162 "method": "bdev_nvme_attach_controller" 00:21:27.162 },{ 00:21:27.162 "params": { 00:21:27.162 "name": "Nvme2", 00:21:27.162 "trtype": "tcp", 00:21:27.162 "traddr": "10.0.0.2", 00:21:27.162 "adrfam": "ipv4", 00:21:27.162 "trsvcid": "4420", 00:21:27.162 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:27.162 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:27.162 "hdgst": false, 00:21:27.162 "ddgst": false 00:21:27.162 }, 00:21:27.162 "method": "bdev_nvme_attach_controller" 00:21:27.162 },{ 00:21:27.162 "params": { 00:21:27.162 "name": "Nvme3", 00:21:27.162 "trtype": "tcp", 00:21:27.162 "traddr": "10.0.0.2", 00:21:27.162 "adrfam": "ipv4", 00:21:27.162 "trsvcid": "4420", 00:21:27.162 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:27.162 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:27.162 "hdgst": false, 00:21:27.162 "ddgst": false 00:21:27.162 }, 00:21:27.162 "method": "bdev_nvme_attach_controller" 00:21:27.162 },{ 00:21:27.162 "params": { 00:21:27.162 "name": "Nvme4", 00:21:27.163 "trtype": "tcp", 00:21:27.163 "traddr": "10.0.0.2", 00:21:27.163 "adrfam": "ipv4", 00:21:27.163 "trsvcid": "4420", 00:21:27.163 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:27.163 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:27.163 "hdgst": false, 00:21:27.163 "ddgst": false 00:21:27.163 }, 00:21:27.163 "method": "bdev_nvme_attach_controller" 00:21:27.163 },{ 00:21:27.163 "params": { 00:21:27.163 "name": "Nvme5", 00:21:27.163 "trtype": "tcp", 00:21:27.163 "traddr": "10.0.0.2", 00:21:27.163 "adrfam": "ipv4", 00:21:27.163 "trsvcid": "4420", 00:21:27.163 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:27.163 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:27.163 "hdgst": false, 00:21:27.163 "ddgst": false 00:21:27.163 }, 00:21:27.163 "method": "bdev_nvme_attach_controller" 00:21:27.163 },{ 00:21:27.163 "params": { 00:21:27.163 "name": "Nvme6", 00:21:27.163 "trtype": "tcp", 00:21:27.163 "traddr": "10.0.0.2", 00:21:27.163 "adrfam": "ipv4", 00:21:27.163 "trsvcid": "4420", 00:21:27.163 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:27.163 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:27.163 "hdgst": false, 00:21:27.163 "ddgst": false 00:21:27.163 }, 00:21:27.163 "method": "bdev_nvme_attach_controller" 00:21:27.163 },{ 00:21:27.163 "params": { 00:21:27.163 "name": "Nvme7", 00:21:27.163 "trtype": "tcp", 00:21:27.163 "traddr": "10.0.0.2", 
00:21:27.163 "adrfam": "ipv4", 00:21:27.163 "trsvcid": "4420", 00:21:27.163 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:27.163 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:27.163 "hdgst": false, 00:21:27.163 "ddgst": false 00:21:27.163 }, 00:21:27.163 "method": "bdev_nvme_attach_controller" 00:21:27.163 },{ 00:21:27.163 "params": { 00:21:27.163 "name": "Nvme8", 00:21:27.163 "trtype": "tcp", 00:21:27.163 "traddr": "10.0.0.2", 00:21:27.163 "adrfam": "ipv4", 00:21:27.163 "trsvcid": "4420", 00:21:27.163 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:27.163 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:27.163 "hdgst": false, 00:21:27.163 "ddgst": false 00:21:27.163 }, 00:21:27.163 "method": "bdev_nvme_attach_controller" 00:21:27.163 },{ 00:21:27.163 "params": { 00:21:27.163 "name": "Nvme9", 00:21:27.163 "trtype": "tcp", 00:21:27.163 "traddr": "10.0.0.2", 00:21:27.163 "adrfam": "ipv4", 00:21:27.163 "trsvcid": "4420", 00:21:27.163 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:27.163 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:27.163 "hdgst": false, 00:21:27.163 "ddgst": false 00:21:27.163 }, 00:21:27.163 "method": "bdev_nvme_attach_controller" 00:21:27.163 },{ 00:21:27.163 "params": { 00:21:27.163 "name": "Nvme10", 00:21:27.163 "trtype": "tcp", 00:21:27.163 "traddr": "10.0.0.2", 00:21:27.163 "adrfam": "ipv4", 00:21:27.163 "trsvcid": "4420", 00:21:27.163 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:27.163 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:27.163 "hdgst": false, 00:21:27.163 "ddgst": false 00:21:27.163 }, 00:21:27.163 "method": "bdev_nvme_attach_controller" 00:21:27.163 }' 00:21:27.163 [2024-11-19 13:13:30.460503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.163 [2024-11-19 13:13:30.501921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.579 Running I/O for 10 seconds... 
00:21:29.216 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:29.216 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:29.216 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:29.216 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.216 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:29.216 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:29.217 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.217 13:13:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2902303 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2902303 ']' 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2902303 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2902303 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2902303' 00:21:29.476 killing process with pid 2902303 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2902303 00:21:29.476 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2902303
00:21:29.476 Received shutdown signal, test time was about 0.870128 seconds
00:21:29.476
00:21:29.476 Latency(us)
00:21:29.476 [2024-11-19T12:13:32.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:29.476 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.476 Verification LBA range: start 0x0 length 0x400
00:21:29.476 Nvme1n1 : 0.86 298.29 18.64 0.00 0.00 211929.04 25872.47 206979.78
00:21:29.476 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.476 Verification LBA range: start 0x0 length 0x400
00:21:29.476 Nvme2n1 : 0.86 303.06 18.94 0.00 0.00 203793.22 6411.13 199685.34
00:21:29.476 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.476 Verification LBA range: start 0x0 length 0x400
00:21:29.476 Nvme3n1 : 0.85 300.35 18.77 0.00 0.00 202563.56 13449.13 220656.86
00:21:29.476 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.476 Verification LBA range: start 0x0 length 0x400
00:21:29.476 Nvme4n1 : 0.85 301.40 18.84 0.00 0.00 197821.44 26442.35 205156.17
00:21:29.476 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.476 Verification LBA range: start 0x0 length 0x400
00:21:29.476 Nvme5n1 : 0.87 294.44 18.40 0.00 0.00 198024.46 16184.54 221568.67
00:21:29.476 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.476 Verification LBA range: start 0x0 length 0x400
00:21:29.476 Nvme6n1 : 0.85 226.95 14.18 0.00 0.00 252045.58 19603.81 253481.85
00:21:29.476 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.476 Verification LBA range: start 0x0 length 0x400
00:21:29.476 Nvme7n1 : 0.86 296.99 18.56 0.00 0.00 188804.45 17552.25 217009.64
00:21:29.476 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.476 Verification LBA range: start 0x0 length 0x400
00:21:29.476 Nvme8n1 : 0.83 231.57 14.47 0.00 0.00 235826.75 14531.90 226127.69
00:21:29.476 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.476 Verification LBA range: start 0x0 length 0x400
00:21:29.476 Nvme9n1 : 0.84 227.78 14.24 0.00 0.00 234932.61 18008.15 228863.11
00:21:29.476 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.476 Verification LBA range: start 0x0 length 0x400
00:21:29.476 Nvme10n1 : 0.84 228.97 14.31 0.00 0.00 228233.57 17780.20 218833.25
00:21:29.476 [2024-11-19T12:13:32.853Z] ===================================================================================================================
00:21:29.476 [2024-11-19T12:13:32.853Z] Total : 2709.80 169.36 0.00 0.00 212893.04 6411.13 253481.85
00:21:29.735 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:30.673 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2902249 00:21:30.673 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:30.673 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:30.673 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:30.673 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:30.673 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:30.673 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:30.673 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:30.673 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:30.673 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:30.673 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:30.673 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:30.673 rmmod nvme_tcp 00:21:30.673 rmmod nvme_fabrics 00:21:30.673 rmmod nvme_keyring 00:21:30.673 13:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:30.673 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:30.673 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:30.673 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2902249 ']' 00:21:30.673 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2902249 00:21:30.673 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2902249 ']' 00:21:30.673 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2902249 00:21:30.673 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:30.673 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.673 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2902249 00:21:30.932 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:30.932 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:30.932 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2902249' 00:21:30.932 killing process with pid 2902249 00:21:30.932 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2902249 00:21:30.932 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2902249 00:21:31.193 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:31.193 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:31.193 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:31.193 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:31.193 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:31.193 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:31.193 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:31.193 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:31.193 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:31.193 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.193 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.193 13:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:33.730 00:21:33.730 real 0m7.312s 00:21:33.730 user 0m21.503s 00:21:33.730 sys 0m1.296s 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.730 ************************************ 00:21:33.730 END TEST nvmf_shutdown_tc2 00:21:33.730 ************************************ 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:33.730 ************************************ 00:21:33.730 START TEST nvmf_shutdown_tc3 00:21:33.730 ************************************ 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.730 13:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:33.730 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.731 13:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:33.731 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:33.731 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:33.731 13:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:33.731 Found net devices under 0000:86:00.0: cvl_0_0 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:33.731 Found net devices under 0000:86:00.1: cvl_0_1 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:33.731 13:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:33.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:21:33.731 00:21:33.731 --- 10.0.0.2 ping statistics --- 00:21:33.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.731 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:21:33.731 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:33.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:21:33.731 00:21:33.731 --- 10.0.0.1 ping statistics --- 00:21:33.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.732 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2903575 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2903575 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2903575 ']' 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
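The tc3 bring-up above repeats the tc2 recipe: move one port of the adapter into a fresh namespace, address both ends, open the NVMe/TCP port, and prove reachability in both directions before starting the target. Condensed from the nvmf/common.sh trace, with interface and namespace names verbatim and only the grouping ours:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# ipts (common.sh@790) tags every rule it installs so teardown can strip them all:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                       # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> initiator

The matching teardown, visible in the tc2 cleanup above, is simply iptables-save | grep -v SPDK_NVMF | iptables-restore: deleting by tag works even if a test dies between setup and teardown, since no rule handles need to be remembered. The doubled (here tripled) ip netns exec prefix on the nvmf_tgt command line appears to be a side effect of common.sh@293 prepending NVMF_TARGET_NS_CMD to NVMF_APP on every nvmftestinit; re-entering the same namespace is harmless.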
00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.732 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.732 [2024-11-19 13:13:36.960420] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:21:33.732 [2024-11-19 13:13:36.960463] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.732 [2024-11-19 13:13:37.039111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:33.732 [2024-11-19 13:13:37.081970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.732 [2024-11-19 13:13:37.082007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.732 [2024-11-19 13:13:37.082014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.732 [2024-11-19 13:13:37.082021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.732 [2024-11-19 13:13:37.082026] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:33.732 [2024-11-19 13:13:37.083621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.732 [2024-11-19 13:13:37.083729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.732 [2024-11-19 13:13:37.083811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.732 [2024-11-19 13:13:37.083812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:33.990 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.990 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:33.990 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:33.990 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:33.990 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.990 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.990 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:33.990 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.990 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.991 [2024-11-19 13:13:37.227604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.991 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.991 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:33.991 13:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:33.991 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:33.991 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.991 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:33.991 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:33.991 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:33.991 [the shutdown.sh@28/@29 for/cat pair repeats identically for each of the ten subsystems] 00:21:33.991 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:33.991 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.991 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.991 Malloc1
00:21:33.991 [2024-11-19 13:13:37.335015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.991 Malloc2 00:21:34.249 Malloc3 00:21:34.249 Malloc4 00:21:34.249 Malloc5 00:21:34.249 Malloc6 00:21:34.249 Malloc7 00:21:34.249 Malloc8 00:21:34.508 Malloc9 00:21:34.508 Malloc10 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2903717 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2903717 /var/tmp/bdevperf.sock 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2903717 ']' 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
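The ten @28/@29 iterations above append per-subsystem RPCs to rpcs.txt; the heredoc bodies are not echoed in this log, but given the Malloc1-Malloc10 bdevs and the 10.0.0.2:4420 listener that appear here, each iteration plausibly amounts to the following (RPC names are standard SPDK rpc.py methods; the malloc size, block size and serial numbers are illustrative assumptions, not taken from this log):

# The rpcs.txt batch, approximately: one Malloc-backed subsystem per index,
# all listening on the namespaced target address.
for i in {1..10}; do
  cat <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done > rpcs.txt
./scripts/rpc.py < rpcs.txt   # rpc_cmd with no arguments plays the batch over stdin

The config assembly that follows wires bdevperf up to these same ten subsystems from the initiator side.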
00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:34.508 { 00:21:34.508 "params": { 00:21:34.508 "name": "Nvme$subsystem", 00:21:34.508 "trtype": "$TEST_TRANSPORT", 00:21:34.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.508 "adrfam": "ipv4", 00:21:34.508 "trsvcid": "$NVMF_PORT", 00:21:34.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.508 "hdgst": ${hdgst:-false}, 00:21:34.508 "ddgst": ${ddgst:-false} 00:21:34.508 }, 00:21:34.508 "method": "bdev_nvme_attach_controller" 00:21:34.508 } 00:21:34.508 EOF 00:21:34.508 )") 00:21:34.508 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:34.508 [the nvmf/common.sh@562/@582 heredoc block repeats identically for each of the ten subsystems] 00:21:34.509 [2024-11-19 13:13:37.807113] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:21:34.509 [2024-11-19 13:13:37.807164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903717 ] 00:21:34.509 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
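The --json /dev/fd/63 in the bdevperf command line above is bash process substitution: gen_nvmf_target_json writes the bdev_nvme config to an anonymous pipe that bdevperf reads as a file, so nothing touches disk. The same invocation spelled out, with the paths and flags from this run:

# <( ... ) is what produced /dev/fd/63 above
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10

Each heredoc assembled above becomes one bdev_nvme_attach_controller entry, Nvme1 through Nvme10, as the printf output that follows shows.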
00:21:34.509 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:34.509 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:34.509 "params": { 00:21:34.509 "name": "Nvme1", 00:21:34.509 "trtype": "tcp", 00:21:34.509 "traddr": "10.0.0.2", 00:21:34.509 "adrfam": "ipv4", 00:21:34.509 "trsvcid": "4420", 00:21:34.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.509 "hdgst": false, 00:21:34.509 "ddgst": false 00:21:34.509 }, 00:21:34.509 "method": "bdev_nvme_attach_controller" 00:21:34.509 },{ 00:21:34.509 "params": { 00:21:34.509 "name": "Nvme2", 00:21:34.509 "trtype": "tcp", 00:21:34.509 "traddr": "10.0.0.2", 00:21:34.509 "adrfam": "ipv4", 00:21:34.509 "trsvcid": "4420", 00:21:34.509 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:34.509 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:34.509 "hdgst": false, 00:21:34.509 "ddgst": false 00:21:34.509 }, 00:21:34.509 "method": "bdev_nvme_attach_controller" 00:21:34.509 },{ 00:21:34.509 "params": { 00:21:34.509 "name": "Nvme3", 00:21:34.509 "trtype": "tcp", 00:21:34.509 "traddr": "10.0.0.2", 00:21:34.509 "adrfam": "ipv4", 00:21:34.509 "trsvcid": "4420", 00:21:34.509 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:34.509 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:34.509 "hdgst": false, 00:21:34.509 "ddgst": false 00:21:34.509 }, 00:21:34.509 "method": "bdev_nvme_attach_controller" 00:21:34.509 },{ 00:21:34.509 "params": { 00:21:34.509 "name": "Nvme4", 00:21:34.509 "trtype": "tcp", 00:21:34.509 "traddr": "10.0.0.2", 00:21:34.509 "adrfam": "ipv4", 00:21:34.509 "trsvcid": "4420", 00:21:34.509 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:34.509 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:34.509 "hdgst": false, 00:21:34.509 "ddgst": false 00:21:34.509 }, 00:21:34.509 "method": "bdev_nvme_attach_controller" 00:21:34.509 },{ 00:21:34.509 "params": { 00:21:34.509 "name": "Nvme5", 00:21:34.509 "trtype": "tcp", 00:21:34.509 "traddr": "10.0.0.2", 00:21:34.509 "adrfam": "ipv4", 00:21:34.509 "trsvcid": "4420", 00:21:34.509 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:34.509 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:34.509 "hdgst": false, 00:21:34.509 "ddgst": false 00:21:34.509 }, 00:21:34.509 "method": "bdev_nvme_attach_controller" 00:21:34.509 },{ 00:21:34.509 "params": { 00:21:34.509 "name": "Nvme6", 00:21:34.509 "trtype": "tcp", 00:21:34.509 "traddr": "10.0.0.2", 00:21:34.510 "adrfam": "ipv4", 00:21:34.510 "trsvcid": "4420", 00:21:34.510 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:34.510 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:34.510 "hdgst": false, 00:21:34.510 "ddgst": false 00:21:34.510 }, 00:21:34.510 "method": "bdev_nvme_attach_controller" 00:21:34.510 },{ 00:21:34.510 "params": { 00:21:34.510 "name": "Nvme7", 00:21:34.510 "trtype": "tcp", 00:21:34.510 "traddr": "10.0.0.2", 00:21:34.510 "adrfam": "ipv4", 00:21:34.510 "trsvcid": "4420", 00:21:34.510 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:34.510 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:34.510 "hdgst": false, 00:21:34.510 "ddgst": false 00:21:34.510 }, 00:21:34.510 "method": "bdev_nvme_attach_controller" 00:21:34.510 },{ 00:21:34.510 "params": { 00:21:34.510 "name": "Nvme8", 00:21:34.510 "trtype": "tcp", 00:21:34.510 "traddr": "10.0.0.2", 00:21:34.510 "adrfam": "ipv4", 00:21:34.510 "trsvcid": "4420", 00:21:34.510 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:34.510 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:34.510 "hdgst": false, 00:21:34.510 "ddgst": false 00:21:34.510 }, 00:21:34.510 "method": "bdev_nvme_attach_controller" 00:21:34.510 },{ 00:21:34.510 "params": { 00:21:34.510 "name": "Nvme9", 00:21:34.510 "trtype": "tcp", 00:21:34.510 "traddr": "10.0.0.2", 00:21:34.510 "adrfam": "ipv4", 00:21:34.510 "trsvcid": "4420", 00:21:34.510 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:34.510 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:34.510 "hdgst": false, 00:21:34.510 "ddgst": false 00:21:34.510 }, 00:21:34.510 "method": "bdev_nvme_attach_controller" 00:21:34.510 },{ 00:21:34.510 "params": { 00:21:34.510 "name": "Nvme10", 00:21:34.510 "trtype": "tcp", 00:21:34.510 "traddr": "10.0.0.2", 00:21:34.510 "adrfam": "ipv4", 00:21:34.510 "trsvcid": "4420", 00:21:34.510 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:34.510 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:34.510 "hdgst": false, 00:21:34.510 "ddgst": false 00:21:34.510 }, 00:21:34.510 "method": "bdev_nvme_attach_controller" 00:21:34.510 }' 00:21:34.769 [2024-11-19 13:13:37.884976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.769 [2024-11-19 13:13:37.926288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.145 Running I/O for 10 seconds... 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:36.407 13:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:36.407 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:36.666 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:36.666 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:36.666 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:36.666 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:36.666 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.666 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:36.666 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.939 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:36.939 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:36.939 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:36.939 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:36.939 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:36.939 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2903575 00:21:36.939 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2903575 ']' 00:21:36.939 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2903575 00:21:36.939 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:36.939 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.939 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2903575 00:21:36.939 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:36.939 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:36.939 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 2903575' 00:21:36.939 killing process with pid 2903575 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2903575 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2903575
00:21:36.939 [2024-11-19 13:13:40.112232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c700 is same with the state(6) to be set
00:21:36.939 [identical recv-state messages for tqpair=0x1c8c700 repeat]
00:21:36.939 [2024-11-19 13:13:40.113116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8f180 is same with the state(6) to be set
00:21:36.940 [the same message repeats for tqpair=0x1c8f180 through 13:13:40.113576]
00:21:36.940 [2024-11-19 13:13:40.114790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8cbf0 is same with the state(6) to be set
00:21:36.940 [the same message repeats for tqpair=0x1c8cbf0 through 13:13:40.115213]
00:21:36.940 [2024-11-19 13:13:40.116324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:36.940 [2024-11-19 13:13:40.116354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.940 [the ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1, cid:2 and cid:3]
00:21:36.940 [2024-11-19 13:13:40.116407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fdc40 is same with the state(6) to be set
00:21:36.941 [the same four aborted ASYNC EVENT REQUESTs and recv-state error repeat for tqpair=0x1586d50, tqpair=0x1584c70 and tqpair=0x15871b0]
00:21:36.941 [2024-11-19 13:13:40.117316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.941 [2024-11-19 13:13:40.117358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.941 [2024-11-19 13:13:40.117376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.941 [2024-11-19 13:13:40.117392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.941 [2024-11-19 13:13:40.117400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.941 [2024-11-19 13:13:40.117416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.941 [2024-11-19 13:13:40.117425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.941 [2024-11-19 13:13:40.117432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.941 [2024-11-19 13:13:40.117440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.941 [2024-11-19 13:13:40.117447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.941 [2024-11-19 13:13:40.117457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.941 [2024-11-19 13:13:40.117471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.941 [2024-11-19 13:13:40.117482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.941 [2024-11-19 13:13:40.117500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.941 [2024-11-19 13:13:40.117507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.941 [2024-11-19 13:13:40.117515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.941 [2024-11-19 13:13:40.117522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.941 [2024-11-19 13:13:40.117531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.941 [2024-11-19 13:13:40.117539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.941 [2024-11-19 13:13:40.117546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.941 [2024-11-19 13:13:40.117555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.941 [2024-11-19 13:13:40.117574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.941 [2024-11-19 13:13:40.117581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.941 [2024-11-19 13:13:40.117589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.941 [2024-11-19 13:13:40.117598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d0c0 is same with the state(6) to be set
00:21:36.942 [2024-11-19 13:13:40.117860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.942 [2024-11-19 13:13:40.117959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.942 [2024-11-19 13:13:40.117968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:36.942 [2024-11-19 13:13:40.117975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 13:13:40.117985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 13:13:40.117992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 13:13:40.117999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 13:13:40.118006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 13:13:40.118014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 13:13:40.118021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 
[2024-11-19 13:13:40.118123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 
13:13:40.118276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118425] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 13:13:40.118440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.943 [2024-11-19 13:13:40.118854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.943 [2024-11-19 13:13:40.118883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.943 [2024-11-19 13:13:40.118893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.943 [2024-11-19 13:13:40.118900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.943 [2024-11-19 13:13:40.118906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.943 [2024-11-19 13:13:40.118913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.943 [2024-11-19 13:13:40.118921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.943 [2024-11-19 13:13:40.118929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.943 [2024-11-19 13:13:40.118935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.943 [2024-11-19 13:13:40.118943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.943 [2024-11-19 13:13:40.118955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.118962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.118968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.118974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.118988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.118995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119158] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the 
state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d5b0 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.119997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.944 [2024-11-19 13:13:40.120163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 
13:13:40.120234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.120366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d930 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.121209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8de00 is same 
with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.121229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8de00 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.121236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8de00 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.121243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8de00 is same with the state(6) to be set 00:21:36.945 [2024-11-19 13:13:40.121411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 
[2024-11-19 13:13:40.121561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 
13:13:40.121727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.945 [2024-11-19 13:13:40.121788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.945 [2024-11-19 13:13:40.121796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.121802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.121811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.121817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.121826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.121832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.121840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.121847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.121855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.121862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.121870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 
13:13:40.121877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.121884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.121891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.121899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.121906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.121914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.121921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.121929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.121936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.121944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.121958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.121966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.121974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.121982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.121989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.121997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.122004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.122012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.122019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.122027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 
13:13:40.122033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.122042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.122048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.122056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.122063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.122070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.122077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.122085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.122091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.122099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.122106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.122114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.122120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.122128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.122134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.122142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.122151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.122159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 13:13:40.122165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.946 [2024-11-19 13:13:40.122173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.946 [2024-11-19 
13:13:40.122180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.946 [2024-11-19 13:13:40.122187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.946 [2024-11-19 13:13:40.122194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.946 [2024-11-19 13:13:40.122203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.946 [2024-11-19 13:13:40.122209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.946 [2024-11-19 13:13:40.122212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.946 [2024-11-19 13:13:40.122217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.946 [2024-11-19 13:13:40.122226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.946 [2024-11-19 13:13:40.122227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.946 [2024-11-19 13:13:40.122235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.946 [2024-11-19 13:13:40.122237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.946 [2024-11-19 13:13:40.122242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.946 [2024-11-19 13:13:40.122245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.946 [2024-11-19 13:13:40.122251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.946 [2024-11-19 13:13:40.122252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.946 [2024-11-19 13:13:40.122261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.946 [2024-11-19 13:13:40.122262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.946 [2024-11-19 13:13:40.122271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.946 [2024-11-19 13:13:40.122272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.946 [2024-11-19 13:13:40.122277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.946 [2024-11-19 13:13:40.122279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.946 [2024-11-19 13:13:40.122284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.946 [2024-11-19 13:13:40.122288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.946 [2024-11-19 13:13:40.122295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.946 [2024-11-19 13:13:40.122296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.946 [2024-11-19 13:13:40.122305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.946 [2024-11-19 13:13:40.122307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.946 [2024-11-19 13:13:40.122312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.946 [2024-11-19 13:13:40.122315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.947 [2024-11-19 13:13:40.122320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.947 [2024-11-19 13:13:40.122327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.947 [2024-11-19 13:13:40.122334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.947 [2024-11-19 13:13:40.122342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.947 [2024-11-19 13:13:40.122351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.947 [2024-11-19 13:13:40.122366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.947 [2024-11-19 13:13:40.122374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.947 [2024-11-19 13:13:40.122381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.947 [2024-11-19 13:13:40.122388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.947 [2024-11-19 13:13:40.122395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.947 [2024-11-19 13:13:40.122408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.947 [2024-11-19 13:13:40.122415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.947 [2024-11-19 13:13:40.122426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.947 [2024-11-19 13:13:40.122441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.947 [2024-11-19 13:13:40.122448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set
00:21:36.947 [2024-11-19 13:13:40.122467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with
the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:36.947 [2024-11-19 13:13:40.122474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122604] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e2d0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.122709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:36.947 [2024-11-19 13:13:40.122738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15871b0 (9): Bad file descriptor 00:21:36.947 [2024-11-19 13:13:40.123449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.123466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.123472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.123479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.123485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.123494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.123501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.123507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.123513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 
00:21:36.947 [2024-11-19 13:13:40.123519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.123526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.123532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.123539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.123545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.123552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.947 [2024-11-19 13:13:40.123559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.123981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.124007] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:36.948 [2024-11-19 13:13:40.124032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.124095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:36.948 [2024-11-19 13:13:40.124138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.124221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a8300 (9): Bad file descriptor 00:21:36.948 [2024-11-19 13:13:40.124243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.124352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.124404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.124455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.124508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.124565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.124577] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:36.948 [2024-11-19 13:13:40.124618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.124723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.124786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 
00:21:36.948 [2024-11-19 13:13:40.124838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.124889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.124943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.125008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.125060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.125112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.125164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.125187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.948 [2024-11-19 13:13:40.125220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.125278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15871b0 with addr=10.0.0.2, port=4420 00:21:36.948 [2024-11-19 13:13:40.125334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.125387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15871b0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.125440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.125543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.125559] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:36.948 [2024-11-19 13:13:40.125595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.125702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.125755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8e7c0 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.126356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.948 [2024-11-19 13:13:40.126377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a8300 with addr=10.0.0.2, port=4420 00:21:36.948 [2024-11-19 13:13:40.126385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a8300 is same with the state(6) to be set 00:21:36.948 [2024-11-19 13:13:40.126396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15871b0 (9): Bad file descriptor 00:21:36.948 [2024-11-19 13:13:40.126464] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: 
*ERROR*: Unexpected PDU type 0x00 00:21:36.948 [2024-11-19 13:13:40.126506] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:36.948 [2024-11-19 13:13:40.126657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a8300 (9): Bad file descriptor 00:21:36.948 [2024-11-19 13:13:40.126671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:36.948 [2024-11-19 13:13:40.126678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:36.948 [2024-11-19 13:13:40.126689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:36.948 [2024-11-19 13:13:40.126697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:36.948 [2024-11-19 13:13:40.126723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.948 [2024-11-19 13:13:40.126732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.948 [2024-11-19 13:13:40.126740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.948 [2024-11-19 13:13:40.126747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.948 [2024-11-19 13:13:40.126754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.948 [2024-11-19 13:13:40.126761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.948 [2024-11-19 13:13:40.126768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.948 [2024-11-19 13:13:40.126779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.948 [2024-11-19 13:13:40.126786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149b610 is same with the state(6) to be set 00:21:36.949 [2024-11-19 13:13:40.126874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fdc40 (9): Bad file descriptor 00:21:36.949 [2024-11-19 13:13:40.126905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.949 [2024-11-19 13:13:40.126923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.949 [2024-11-19 13:13:40.126983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.949 [2024-11-19 13:13:40.127037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.949 [2024-11-19 13:13:40.127090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.949 [2024-11-19 
13:13:40.127142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.949 [2024-11-19 13:13:40.127194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.949 [2024-11-19 13:13:40.127249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.949 [2024-11-19 13:13:40.127307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c4590 is same with the state(6) to be set 00:21:36.949 [2024-11-19 13:13:40.127379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.949 [2024-11-19 13:13:40.127413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.949 [2024-11-19 13:13:40.127465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.949 [2024-11-19 13:13:40.127521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.949 [2024-11-19 13:13:40.127575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.949 [2024-11-19 13:13:40.127627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.949 [2024-11-19 13:13:40.127679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.949 [2024-11-19 13:13:40.127732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.949 [2024-11-19 13:13:40.127785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b24c0 is same with the state(6) to be set 00:21:36.949 [2024-11-19 13:13:40.127851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1586d50 (9): Bad file descriptor 00:21:36.949 [2024-11-19 13:13:40.127901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1584c70 (9): Bad file descriptor 00:21:36.949 [2024-11-19 13:13:40.127965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.949 [2024-11-19 13:13:40.128008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.949 [2024-11-19 13:13:40.128060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.949 [2024-11-19 13:13:40.128112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.949 [2024-11-19 13:13:40.128168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.949 [2024-11-19 13:13:40.128220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.949 [2024-11-19 13:13:40.128275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.949 [2024-11-19 13:13:40.128332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.949 [2024-11-19 13:13:40.128387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b17a0 is same with the state(6) to be set 00:21:36.949 [2024-11-19 13:13:40.128535] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:36.949 [2024-11-19 13:13:40.129091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:36.949 [2024-11-19 13:13:40.129105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:36.949 [2024-11-19 13:13:40.129113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:36.949 [2024-11-19 13:13:40.129120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:36.949 [2024-11-19 13:13:40.134412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:36.949 [2024-11-19 13:13:40.134683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.949 [2024-11-19 13:13:40.134697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15871b0 with addr=10.0.0.2, port=4420 00:21:36.949 [2024-11-19 13:13:40.134706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15871b0 is same with the state(6) to be set 00:21:36.949 [2024-11-19 13:13:40.134738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15871b0 (9): Bad file descriptor 00:21:36.949 [2024-11-19 13:13:40.134770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:36.949 [2024-11-19 13:13:40.134777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:36.949 [2024-11-19 13:13:40.134785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:36.949 [2024-11-19 13:13:40.134792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:21:36.949 [2024-11-19 13:13:40.135598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:36.949 [2024-11-19 13:13:40.135834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.949 [2024-11-19 13:13:40.135846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a8300 with addr=10.0.0.2, port=4420 00:21:36.949 [2024-11-19 13:13:40.135853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a8300 is same with the state(6) to be set 00:21:36.949 [2024-11-19 13:13:40.135884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a8300 (9): Bad file descriptor 00:21:36.949 [2024-11-19 13:13:40.135915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:36.949 [2024-11-19 13:13:40.135922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:36.949 [2024-11-19 13:13:40.135929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:36.949 [2024-11-19 13:13:40.135940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:36.949 [2024-11-19 13:13:40.136693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149b610 (9): Bad file descriptor 00:21:36.949 [2024-11-19 13:13:40.136725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c4590 (9): Bad file descriptor 00:21:36.949 [2024-11-19 13:13:40.136741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b24c0 (9): Bad file descriptor 00:21:36.949 [2024-11-19 13:13:40.136766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b17a0 (9): Bad file descriptor 00:21:36.949 [2024-11-19 13:13:40.136861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.949 [2024-11-19 13:13:40.136871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.949 [2024-11-19 13:13:40.136883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.949 [2024-11-19 13:13:40.136890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.949 [2024-11-19 13:13:40.136898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.949 [2024-11-19 13:13:40.136906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.949 [2024-11-19 13:13:40.136915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.949 [2024-11-19 13:13:40.136922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.949 [2024-11-19 13:13:40.136931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.949 [2024-11-19 13:13:40.136938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ sqid:1 cid:5-62 nsid:1 (lba:25216-32512, len:128), each completed ABORTED - SQ DELETION (00/08), elided ...]
00:21:36.951 [2024-11-19 13:13:40.137827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.951 [2024-11-19 13:13:40.137834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.951 [2024-11-19 13:13:40.137841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178c6a0 is same with the state(6) to be set
00:21:36.951 [2024-11-19 13:13:40.138851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.951 [2024-11-19 13:13:40.138862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ sqid:1 cid:8-22 (lba:25600-27392) and WRITE sqid:1 cid:0-3 (lba:32768-33152), each completed ABORTED - SQ DELETION (00/08), elided ...]
00:21:36.951 [2024-11-19 13:13:40.139168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.951 [2024-11-19 13:13:40.146451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ sqid:1 cid:24-62 (lba:27648-32512) and WRITE sqid:1 cid:4-6 (lba:33280-33536), each completed ABORTED - SQ DELETION (00/08), elided ...]
00:21:36.953 [2024-11-19 13:13:40.147095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.953 [2024-11-19 13:13:40.147101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.953 [2024-11-19 13:13:40.147109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c6e0 is same with the state(6) to be set
00:21:36.953 [2024-11-19 13:13:40.148120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.953 [2024-11-19 13:13:40.148134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.953 [2024-11-19 13:13:40.148145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.953 [2024-11-19 13:13:40.148152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:36.953 [2024-11-19 13:13:40.148160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28d6410 is same with the state(6) to be set
00:21:36.953 [2024-11-19 13:13:40.148251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:36.953 [2024-11-19 13:13:40.148259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.953 [2024-11-19 13:13:40.148769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.953 [2024-11-19 13:13:40.148775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.954 [2024-11-19 13:13:40.148784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.954 [2024-11-19 13:13:40.148790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.954 [2024-11-19 13:13:40.148798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.954 [2024-11-19 13:13:40.148805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.954 [2024-11-19 13:13:40.148813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.954 [2024-11-19 13:13:40.148819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.954 [2024-11-19 13:13:40.148828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.954 [2024-11-19 13:13:40.148834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.954 [2024-11-19 13:13:40.148843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.954 [2024-11-19 13:13:40.148849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.954 [2024-11-19 13:13:40.148857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.954 [2024-11-19 13:13:40.148864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.954 [2024-11-19 13:13:40.148872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.954 [2024-11-19 13:13:40.148881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.954 [2024-11-19 13:13:40.148889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.954 [2024-11-19 13:13:40.148896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.954 [2024-11-19 13:13:40.148904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:36.954 [2024-11-19 13:13:40.148911 .. 13:13:40.149215] [repeated span condensed: excerpt opens mid-stream; 20 queued READ commands (sqid:1 cid:44-63 nsid:1 lba:22016-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each printed by nvme_io_qpair_print_command and completed by spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:21:36.954 [2024-11-19 13:13:40.149222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1811f30 is same with the state(6) to be set
00:21:36.954 [2024-11-19 13:13:40.150545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:21:36.954 [2024-11-19 13:13:40.150562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:21:36.954 [2024-11-19 13:13:40.150579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:21:36.954 [2024-11-19 13:13:40.150670 .. 13:13:40.150743] [repeated span condensed: 4 admin ASYNC EVENT REQUEST commands (0c) (qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000) each completed ABORTED - SQ DELETION (00/08)]
00:21:36.954 [2024-11-19 13:13:40.150752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f7140 is same with the state(6) to be set
00:21:36.954 [2024-11-19 13:13:40.150775] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
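The "(00/08)" pairs printed by spdk_nvme_print_completion encode NVMe completion status as (SCT/SC): status code type 0x00 is the generic set, and generic status code 0x08 is "Command Aborted due to SQ Deletion", which is consistent with the controller resets logged above tearing down the I/O submission queues. A minimal decoding sketch for that notation, assuming only the NVMe-spec status values; the helper name below is illustrative, not SPDK API:

/* Decode the "(SCT/SC)" pair that spdk_nvme_print_completion() logs,
 * e.g. "(00/08)". Per the NVMe spec, SCT 0x0 = generic status type,
 * and generic SC 0x08 = Command Aborted due to SQ Deletion. */
#include <stdio.h>
#include <stdint.h>

static const char *nvme_generic_sc_str(uint8_t sc)
{
    switch (sc) {
    case 0x00: return "SUCCESS";
    case 0x07: return "ABORTED - BY REQUEST";
    case 0x08: return "ABORTED - SQ DELETION";
    default:   return "UNKNOWN";
    }
}

int main(void)
{
    uint8_t sct = 0x00, sc = 0x08; /* the "(00/08)" seen throughout this log */

    if (sct == 0x00) {
        printf("(%02x/%02x) -> %s\n", sct, sc, nvme_generic_sc_str(sc));
    }
    return 0;
}

Compiled and run, this prints "(00/08) -> ABORTED - SQ DELETION", matching the completion text in the records above.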
00:21:36.954 [2024-11-19 13:13:40.151807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:36.954 [2024-11-19 13:13:40.152070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.954 [2024-11-19 13:13:40.152088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1586d50 with addr=10.0.0.2, port=4420
00:21:36.954 [2024-11-19 13:13:40.152098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586d50 is same with the state(6) to be set
00:21:36.954 [2024-11-19 13:13:40.152329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.954 [2024-11-19 13:13:40.152343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1584c70 with addr=10.0.0.2, port=4420
00:21:36.954 [2024-11-19 13:13:40.152352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1584c70 is same with the state(6) to be set
00:21:36.954 [2024-11-19 13:13:40.152599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.954 [2024-11-19 13:13:40.152614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19fdc40 with addr=10.0.0.2, port=4420
00:21:36.955 [2024-11-19 13:13:40.152624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fdc40 is same with the state(6) to be set
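On Linux, errno 111 is ECONNREFUSED: nothing was accepting TCP connections at 10.0.0.2:4420 (the NVMe/TCP address and port from the records above) at the moment posix_sock_create() tried to reconnect during the reset. A minimal self-contained sketch of the failing call; only the address and port are taken from the log, everything else is illustrative:

/* Reproduce "connect() failed, errno = 111" (ECONNREFUSED): a plain
 * TCP connect() to a host/port where no listener is currently bound,
 * as happens here while the target side is mid-reset. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };

    if (fd < 0) {
        return 1;
    }
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener at the target, errno is 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

That failed connect() is what the repeated posix_sock_create/nvme_tcp_qpair_connect_sock *ERROR* lines above report before the host retries the qpair.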
00:21:36.955 [2024-11-19 13:13:40.153250 .. 13:13:40.154590] [repeated span condensed: 64 queued READ commands (sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each printed by nvme_io_qpair_print_command and completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:21:36.956 [2024-11-19 13:13:40.154600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1989a10 is same with the state(6) to be set
00:21:36.956 [2024-11-19 13:13:40.155971 .. 13:13:40.157323] [repeated span condensed: on the next qpair, 23 queued READ commands (sqid:1 cid:5-27 lba:25216-28032), 5 queued WRITE commands (sqid:1 cid:0-4 lba:32768-33280), and 36 more queued READ commands (sqid:1 cid:28-63 lba:28160-32640) each completed ABORTED - SQ DELETION (00/08)]
00:21:36.958 [2024-11-19 13:13:40.157333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198aed0 is same with the state(6) to be set
00:21:36.958 [2024-11-19 13:13:40.158678 .. 13:13:40.159608] [repeated span condensed: a further qpair drains the same way: queued READ commands sqid:1 cid:0-47 (lba:24576-30592 len:128) are printed and, through cid:46, completed ABORTED - SQ DELETION (00/08); the excerpt ends mid-stream at the cid:47 command print]
00:21:36.959 [2024-11-19 13:13:40.159615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.959 [2024-11-19 13:13:40.159622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.959 [2024-11-19 13:13:40.159629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.959 [2024-11-19 13:13:40.159638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.959 [2024-11-19 13:13:40.159644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.959 [2024-11-19 13:13:40.159653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.959 [2024-11-19 13:13:40.159659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.959 [2024-11-19 13:13:40.159669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.959 [2024-11-19 13:13:40.159676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.959 [2024-11-19 13:13:40.159684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.959 [2024-11-19 13:13:40.159691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.959 [2024-11-19 13:13:40.159699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.959 [2024-11-19 13:13:40.159706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.959 [2024-11-19 13:13:40.159714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.959 [2024-11-19 13:13:40.159721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.959 [2024-11-19 13:13:40.159730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.959 [2024-11-19 13:13:40.159737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.959 [2024-11-19 13:13:40.159745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.959 [2024-11-19 13:13:40.159752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.959 [2024-11-19 13:13:40.159761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.959 [2024-11-19 
13:13:40.159768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.959 [2024-11-19 13:13:40.159776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.959 [2024-11-19 13:13:40.159783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.959 [2024-11-19 13:13:40.159791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.959 [2024-11-19 13:13:40.159798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.959 [2024-11-19 13:13:40.159806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.959 [2024-11-19 13:13:40.159813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.959 [2024-11-19 13:13:40.159822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.959 [2024-11-19 13:13:40.159828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.959 [2024-11-19 13:13:40.159837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.959 [2024-11-19 13:13:40.159843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.959 [2024-11-19 13:13:40.159852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.959 [2024-11-19 13:13:40.159860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.960 [2024-11-19 13:13:40.159868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d9d0 is same with the state(6) to be set 00:21:36.960 [2024-11-19 13:13:40.161081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:36.960 [2024-11-19 13:13:40.161101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:36.960 [2024-11-19 13:13:40.161110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:36.960 [2024-11-19 13:13:40.161119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:36.960 task offset: 31104 on job bdev=Nvme1n1 fails 00:21:36.960 00:21:36.960 Latency(us) 00:21:36.960 [2024-11-19T12:13:40.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.960 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.960 Job: Nvme1n1 ended in about 0.86 seconds with error 00:21:36.960 Verification LBA range: start 0x0 length 0x400 00:21:36.960 Nvme1n1 : 0.86 224.07 14.00 74.69 0.00 211682.42 3362.28 232510.33 00:21:36.960 Job: Nvme2n1 (Core Mask 
0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.960 Job: Nvme2n1 ended in about 0.87 seconds with error 00:21:36.960 Verification LBA range: start 0x0 length 0x400 00:21:36.960 Nvme2n1 : 0.87 219.51 13.72 73.17 0.00 212075.52 15956.59 222480.47 00:21:36.960 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.960 Job: Nvme3n1 ended in about 0.88 seconds with error 00:21:36.960 Verification LBA range: start 0x0 length 0x400 00:21:36.960 Nvme3n1 : 0.88 225.13 14.07 72.40 0.00 204745.56 10314.80 208803.39 00:21:36.960 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.960 Job: Nvme4n1 ended in about 0.89 seconds with error 00:21:36.960 Verification LBA range: start 0x0 length 0x400 00:21:36.960 Nvme4n1 : 0.89 215.37 13.46 71.79 0.00 208218.60 15044.79 216097.84 00:21:36.960 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.960 Job: Nvme5n1 ended in about 0.89 seconds with error 00:21:36.960 Verification LBA range: start 0x0 length 0x400 00:21:36.960 Nvme5n1 : 0.89 220.30 13.77 71.57 0.00 201033.62 16412.49 220656.86 00:21:36.960 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.960 Job: Nvme6n1 ended in about 0.86 seconds with error 00:21:36.960 Verification LBA range: start 0x0 length 0x400 00:21:36.960 Nvme6n1 : 0.86 223.31 13.96 74.44 0.00 192382.09 1795.12 225215.89 00:21:36.960 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.960 Job: Nvme7n1 ended in about 0.90 seconds with error 00:21:36.960 Verification LBA range: start 0x0 length 0x400 00:21:36.960 Nvme7n1 : 0.90 214.13 13.38 71.38 0.00 197635.34 17894.18 206979.78 00:21:36.960 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.960 Job: Nvme8n1 ended in about 0.89 seconds with error 00:21:36.960 Verification LBA range: start 0x0 length 0x400 00:21:36.960 Nvme8n1 : 0.89 214.10 13.38 2.25 0.00 248123.66 14189.97 228863.11 00:21:36.960 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.960 Verification LBA range: start 0x0 length 0x400 00:21:36.960 Nvme9n1 : 0.87 221.51 13.84 0.00 0.00 243037.05 34192.70 223392.28 00:21:36.960 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.960 Job: Nvme10n1 ended in about 0.89 seconds with error 00:21:36.960 Verification LBA range: start 0x0 length 0x400 00:21:36.960 Nvme10n1 : 0.89 144.45 9.03 72.22 0.00 244350.74 19147.91 240716.58 00:21:36.960 [2024-11-19T12:13:40.337Z] =================================================================================================================== 00:21:36.960 [2024-11-19T12:13:40.337Z] Total : 2121.87 132.62 583.91 0.00 213935.51 1795.12 240716.58 00:21:36.960 [2024-11-19 13:13:40.192203] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:36.960 [2024-11-19 13:13:40.192252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:36.960 [2024-11-19 13:13:40.192572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.960 [2024-11-19 13:13:40.192589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c4590 with addr=10.0.0.2, port=4420 00:21:36.960 [2024-11-19 13:13:40.192600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c4590 is same with the state(6) to be set 00:21:36.960 [2024-11-19 13:13:40.192615] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1586d50 (9): Bad file descriptor 00:21:36.960 [2024-11-19 13:13:40.192626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1584c70 (9): Bad file descriptor 00:21:36.960 [2024-11-19 13:13:40.192636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fdc40 (9): Bad file descriptor 00:21:36.960 [2024-11-19 13:13:40.192666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f7140 (9): Bad file descriptor 00:21:36.960 [2024-11-19 13:13:40.192691] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:21:36.960 [2024-11-19 13:13:40.192703] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:21:36.960 [2024-11-19 13:13:40.192715] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:21:36.960 [2024-11-19 13:13:40.193512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.960 [2024-11-19 13:13:40.193534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15871b0 with addr=10.0.0.2, port=4420 00:21:36.960 [2024-11-19 13:13:40.193543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15871b0 is same with the state(6) to be set 00:21:36.960 [2024-11-19 13:13:40.193765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.960 [2024-11-19 13:13:40.193776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a8300 with addr=10.0.0.2, port=4420 00:21:36.960 [2024-11-19 13:13:40.193784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a8300 is same with the state(6) to be set 00:21:36.960 [2024-11-19 13:13:40.194020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.960 [2024-11-19 13:13:40.194031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b17a0 with addr=10.0.0.2, port=4420 00:21:36.960 [2024-11-19 13:13:40.194038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b17a0 is same with the state(6) to be set 00:21:36.960 [2024-11-19 13:13:40.194232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.960 [2024-11-19 13:13:40.194243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b24c0 with addr=10.0.0.2, port=4420 00:21:36.960 [2024-11-19 13:13:40.194250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b24c0 is same with the state(6) to be set 00:21:36.960 [2024-11-19 13:13:40.194468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.960 [2024-11-19 13:13:40.194478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x149b610 with addr=10.0.0.2, port=4420 00:21:36.960 [2024-11-19 13:13:40.194485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149b610 is same with the state(6) to be set 00:21:36.960 [2024-11-19 13:13:40.194496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c4590 (9): Bad file 
descriptor 00:21:36.960 [2024-11-19 13:13:40.194511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:36.960 [2024-11-19 13:13:40.194518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:36.960 [2024-11-19 13:13:40.194526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:36.960 [2024-11-19 13:13:40.194535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:36.960 [2024-11-19 13:13:40.194544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:36.960 [2024-11-19 13:13:40.194550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:36.960 [2024-11-19 13:13:40.194556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:36.960 [2024-11-19 13:13:40.194563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:36.960 [2024-11-19 13:13:40.194570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:36.960 [2024-11-19 13:13:40.194577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:36.960 [2024-11-19 13:13:40.194583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:36.960 [2024-11-19 13:13:40.194589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:36.960 [2024-11-19 13:13:40.194600] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:21:36.960 [2024-11-19 13:13:40.195432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15871b0 (9): Bad file descriptor 00:21:36.960 [2024-11-19 13:13:40.195452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a8300 (9): Bad file descriptor 00:21:36.960 [2024-11-19 13:13:40.195461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b17a0 (9): Bad file descriptor 00:21:36.960 [2024-11-19 13:13:40.195469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b24c0 (9): Bad file descriptor 00:21:36.960 [2024-11-19 13:13:40.195478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149b610 (9): Bad file descriptor 00:21:36.960 [2024-11-19 13:13:40.195486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:36.960 [2024-11-19 13:13:40.195492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:36.960 [2024-11-19 13:13:40.195499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:36.960 [2024-11-19 13:13:40.195505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
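For reference while reading these failures: errno = 111 in the posix_sock_create connect() errors is ECONNREFUSED on Linux — nothing is listening on 10.0.0.2:4420 any more, which is exactly the condition this shutdown test provokes. A one-liner to translate the raw value, assuming python3 is available on the build host:

  # Map the numeric errno from the log to its symbolic name and message.
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # ECONNREFUSED Connection refused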
00:21:36.960 [2024-11-19 13:13:40.195557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:36.960 [2024-11-19 13:13:40.195569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:36.960 [2024-11-19 13:13:40.195577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:36.960 [2024-11-19 13:13:40.195585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:36.961 [2024-11-19 13:13:40.195615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:36.961 [2024-11-19 13:13:40.195622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:36.961 [2024-11-19 13:13:40.195629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:36.961 [2024-11-19 13:13:40.195638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:36.961 [2024-11-19 13:13:40.195645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:36.961 [2024-11-19 13:13:40.195651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:36.961 [2024-11-19 13:13:40.195657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:36.961 [2024-11-19 13:13:40.195663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:36.961 [2024-11-19 13:13:40.195670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:36.961 [2024-11-19 13:13:40.195675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:36.961 [2024-11-19 13:13:40.195682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:36.961 [2024-11-19 13:13:40.195688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:36.961 [2024-11-19 13:13:40.195694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:36.961 [2024-11-19 13:13:40.195700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:36.961 [2024-11-19 13:13:40.195706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:36.961 [2024-11-19 13:13:40.195711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:36.961 [2024-11-19 13:13:40.195718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:36.961 [2024-11-19 13:13:40.195724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:36.961 [2024-11-19 13:13:40.195730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:21:36.961 [2024-11-19 13:13:40.195736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:36.961 [2024-11-19 13:13:40.196453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.961 [2024-11-19 13:13:40.196473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f7140 with addr=10.0.0.2, port=4420 00:21:36.961 [2024-11-19 13:13:40.196482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f7140 is same with the state(6) to be set 00:21:36.961 [2024-11-19 13:13:40.196629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.961 [2024-11-19 13:13:40.196640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19fdc40 with addr=10.0.0.2, port=4420 00:21:36.961 [2024-11-19 13:13:40.196647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fdc40 is same with the state(6) to be set 00:21:36.961 [2024-11-19 13:13:40.196839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.961 [2024-11-19 13:13:40.196849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1584c70 with addr=10.0.0.2, port=4420 00:21:36.961 [2024-11-19 13:13:40.196856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1584c70 is same with the state(6) to be set 00:21:36.961 [2024-11-19 13:13:40.196955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.961 [2024-11-19 13:13:40.196966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1586d50 with addr=10.0.0.2, port=4420 00:21:36.961 [2024-11-19 13:13:40.196973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586d50 is same with the state(6) to be set 00:21:36.961 [2024-11-19 13:13:40.197006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f7140 (9): Bad file descriptor 00:21:36.961 [2024-11-19 13:13:40.197017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fdc40 (9): Bad file descriptor 00:21:36.961 [2024-11-19 13:13:40.197025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1584c70 (9): Bad file descriptor 00:21:36.961 [2024-11-19 13:13:40.197033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1586d50 (9): Bad file descriptor 00:21:36.961 [2024-11-19 13:13:40.197058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:36.961 [2024-11-19 13:13:40.197065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:36.961 [2024-11-19 13:13:40.197072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:36.961 [2024-11-19 13:13:40.197079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:21:36.961 [2024-11-19 13:13:40.197085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:36.961 [2024-11-19 13:13:40.197091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:36.961 [2024-11-19 13:13:40.197098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:36.961 [2024-11-19 13:13:40.197103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:36.961 [2024-11-19 13:13:40.197110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:36.961 [2024-11-19 13:13:40.197116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:36.961 [2024-11-19 13:13:40.197122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:36.961 [2024-11-19 13:13:40.197128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:36.961 [2024-11-19 13:13:40.197134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:36.961 [2024-11-19 13:13:40.197140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:36.961 [2024-11-19 13:13:40.197147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:36.961 [2024-11-19 13:13:40.197152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
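The block above is the same four-step failure sequence (Ctrlr is in error state → controller reinitialization failed → in failed state → Resetting controller failed) played out for cnode9, cnode10, cnode3 and cnode2 in turn: with the target gone, every reconnect attempt dies with ECONNREFUSED and bdev_nvme marks each controller failed. Outside the harness, once a target is listening again, re-attaching one of these controllers by hand would look roughly like the sketch below (the controller name Nvme1 is an arbitrary example, and scripts/rpc.py is assumed to be run from the spdk checkout):

  # Re-attach the subsystem that was exported as cnode1, over TCP/IPv4.
  ./scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1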
00:21:37.220 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:38.598 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2903717 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2903717 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2903717 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:38.599 rmmod nvme_tcp 00:21:38.599 
rmmod nvme_fabrics 00:21:38.599 rmmod nvme_keyring 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2903575 ']' 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2903575 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2903575 ']' 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2903575 00:21:38.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2903575) - No such process 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2903575 is not found' 00:21:38.599 Process with pid 2903575 is not found 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.599 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:40.507 00:21:40.507 real 0m7.120s 00:21:40.507 user 0m16.175s 00:21:40.507 sys 0m1.296s 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:40.507 ************************************ 00:21:40.507 END TEST nvmf_shutdown_tc3 00:21:40.507 ************************************ 00:21:40.507 13:13:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:40.507 ************************************ 00:21:40.507 START TEST nvmf_shutdown_tc4 00:21:40.507 ************************************ 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:40.507 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:40.507 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.507 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.508 13:13:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:40.508 Found net devices under 0000:86:00.0: cvl_0_0 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:40.508 Found net devices under 0000:86:00.1: cvl_0_1 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:40.508 13:13:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.508 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.768 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.768 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.768 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:40.768 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:40.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:21:40.768 00:21:40.768 --- 10.0.0.2 ping statistics --- 00:21:40.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.768 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:21:40.768 00:21:40.768 --- 10.0.0.1 ping statistics --- 00:21:40.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.768 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2904889 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2904889 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2904889 ']' 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
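The nvmf_tcp_init trace above reduces to a short, hand-runnable sequence. A minimal sketch of the equivalent manual setup, assuming the same cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing this run uses:

  # target-side port lives in its own namespace; initiator side stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP listen port and verify both directions, as the script does
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1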
00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.768 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:41.028 [2024-11-19 13:13:44.165746] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:21:41.028 [2024-11-19 13:13:44.165797] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.028 [2024-11-19 13:13:44.246310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.028 [2024-11-19 13:13:44.286424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.028 [2024-11-19 13:13:44.286463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.028 [2024-11-19 13:13:44.286471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.028 [2024-11-19 13:13:44.286478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.028 [2024-11-19 13:13:44.286483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.028 [2024-11-19 13:13:44.288168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.028 [2024-11-19 13:13:44.288280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.028 [2024-11-19 13:13:44.288370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.028 [2024-11-19 13:13:44.288371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:41.965 [2024-11-19 13:13:45.055238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:41.965 13:13:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.965 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:41.965 Malloc1 
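Each iteration of the for-loop above appends one subsystem's RPC stanza to rpcs.txt; the single rpc_cmd call that follows replays the whole batch over /var/tmp/spdk.sock. A rough stand-alone equivalent using scripts/rpc.py directly; the malloc bdev size and block-size arguments are illustrative assumptions, since the log only shows the resulting Malloc1..Malloc10 names:

  rpc="ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192   # exactly as shutdown.sh@21 does
  for i in {1..10}; do
      $rpc bdev_malloc_create -b Malloc$i 64 512                # sizes assumed
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done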
00:21:41.965 [2024-11-19 13:13:45.175113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.965 Malloc2 00:21:41.965 Malloc3 00:21:41.965 Malloc4 00:21:41.965 Malloc5 00:21:42.224 Malloc6 00:21:42.224 Malloc7 00:21:42.224 Malloc8 00:21:42.224 Malloc9 00:21:42.224 Malloc10 00:21:42.224 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.224 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:42.224 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:42.224 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:42.482 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2905166 00:21:42.482 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:42.482 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:42.483 [2024-11-19 13:13:45.684959] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:47.773 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:47.773 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2904889 00:21:47.773 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2904889 ']' 00:21:47.773 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2904889 00:21:47.773 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:47.773 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.773 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2904889 00:21:47.773 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:47.773 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:47.773 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2904889' 00:21:47.773 killing process with pid 2904889 00:21:47.773 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2904889 00:21:47.773 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2904889 00:21:47.773 [2024-11-19 13:13:50.677364] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6e680 is same with the state(6) to be set 00:21:47.773 [2024-11-19 13:13:50.677408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6e680 is same with the state(6) to be set 00:21:47.773 [2024-11-19 13:13:50.677416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6e680 is same with the state(6) to be set 00:21:47.773 [2024-11-19 13:13:50.677424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6e680 is same with the state(6) to be set 00:21:47.773 [2024-11-19 13:13:50.677430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6e680 is same with the state(6) to be set 00:21:47.773 Write completed with error (sct=0, sc=8) 00:21:47.773 Write completed with error (sct=0, sc=8) 00:21:47.773 starting I/O failed: -6 00:21:47.773 Write completed with error (sct=0, sc=8) 00:21:47.773 Write completed with error (sct=0, sc=8) 00:21:47.773 Write completed with error (sct=0, sc=8) 00:21:47.773 Write completed with error (sct=0, sc=8) 00:21:47.773 starting I/O failed: -6 00:21:47.773 Write completed with error (sct=0, sc=8) 00:21:47.773 Write completed with error (sct=0, sc=8) 00:21:47.773 Write completed with error (sct=0, sc=8) 00:21:47.773 Write completed with error (sct=0, sc=8) 00:21:47.773 starting I/O failed: -6 00:21:47.773 Write completed with error (sct=0, sc=8) 00:21:47.773 Write completed with error (sct=0, sc=8) 00:21:47.774 [2024-11-19 13:13:50.677984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6eb70 is same with the state(6) to be set 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) starting I/O failed: -6 [2024-11-19 13:13:50.678021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6eb70 is same with the state(6) to be set 00:21:47.774 Write completed with error (sct=0, sc=8) [2024-11-19 13:13:50.678029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6eb70 is same with the state(6) to be set 00:21:47.774 [2024-11-19 13:13:50.678037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6eb70 is same with the state(6) to be set 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 [2024-11-19 13:13:50.678043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6eb70 is same with the state(6) to be set 00:21:47.774 [2024-11-19 13:13:50.678050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6eb70 is same with the state(6) to be set 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 [2024-11-19 13:13:50.678056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6eb70 is same with the state(6) to be set 00:21:47.774 [2024-11-19 13:13:50.678063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6eb70 is same with the state(6) to be set 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8)
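Everything from here down is the intended failure mode of nvmf_shutdown_tc4: killprocess takes the target away while spdk_nvme_perf still has writes queued, so in-flight commands complete with sct=0, sc=8 (generic status, Command Aborted due to SQ Deletion in the NVMe base spec) and new submissions fail with -6 (ENXIO, matching the 'No such device or address' qpair errors). A sketch of the same sequence by hand, reusing this run's perf invocation:

  bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
  $bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!
  sleep 5              # shutdown.sh@150: let I/O reach steady state first
  kill $nvmfpid        # SIGTERM to the nvmf_tgt pid (2904889 in this run)
  wait $perfpid        # perf drains, reporting the aborted writes seen below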
00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 [2024-11-19 13:13:50.678431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write 
completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 [2024-11-19 13:13:50.679398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 [2024-11-19 13:13:50.679522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e70d80 is same with the state(6) to be set 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 [2024-11-19 13:13:50.679543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e70d80 is same with the state(6) to be set 00:21:47.774 starting I/O failed: -6 00:21:47.774 [2024-11-19 13:13:50.679550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e70d80 is same with the state(6) to be set 00:21:47.774 [2024-11-19 13:13:50.679557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e70d80 is same with the state(6) to be set 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 [2024-11-19 13:13:50.679564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e70d80 is same with the state(6) to be set 00:21:47.774 starting I/O failed: -6 00:21:47.774 [2024-11-19 13:13:50.679572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e70d80 is same with the state(6) to be set 00:21:47.774 [2024-11-19 13:13:50.679578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e70d80 is same with the state(6) to be set 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 [2024-11-19 13:13:50.679584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e70d80 is same with the state(6) to be set 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) starting I/O failed: -6 00:21:47.774 Write
completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 starting I/O failed: -6 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 [2024-11-19 13:13:50.679854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71250 is same with the state(6) to be set 00:21:47.774 Write completed with error (sct=0, sc=8) starting I/O failed: -6 [2024-11-19 13:13:50.679872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71250 is same with the state(6) to be set 00:21:47.774 [2024-11-19 13:13:50.679879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71250 is same with the state(6) to be set 00:21:47.774 Write completed with error (sct=0, sc=8) starting I/O failed: -6 [2024-11-19 13:13:50.679887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71250 is same with the state(6) to be set 00:21:47.774 [2024-11-19 13:13:50.679893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71250 is same with the state(6) to be set 00:21:47.774 [2024-11-19 13:13:50.679901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71250 is same with the state(6) to be set 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.774 [2024-11-19 13:13:50.679908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71250 is same with the state(6) to be set 00:21:47.774 [2024-11-19 13:13:50.679914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71250 is same with the state(6) to be set 00:21:47.774 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 [2024-11-19 13:13:50.680218]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71740 is same with the state(6) to be set 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 [2024-11-19 13:13:50.680241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71740 is same with the state(6) to be set 00:21:47.775 starting I/O failed: -6 00:21:47.775 [2024-11-19 13:13:50.680249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71740 is same with the state(6) to be set 00:21:47.775 [2024-11-19 13:13:50.680256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71740 is same with the state(6) to be set 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 [2024-11-19 13:13:50.680262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71740 is same with the state(6) to be set 00:21:47.775 starting I/O failed: -6 00:21:47.775 [2024-11-19 13:13:50.680274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71740 is same with the state(6) to be set 00:21:47.775 [2024-11-19 13:13:50.680280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71740 is same with the state(6) to be set 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 [2024-11-19 13:13:50.680286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71740 is same with the state(6) to be set 00:21:47.775 [2024-11-19 13:13:50.680293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71740 is same with the state(6) to be set 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 [2024-11-19 13:13:50.680299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71740 is same with the state(6) to be set 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 [2024-11-19 13:13:50.680399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 [2024-11-19 13:13:50.680618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e708b0 is same with the state(6) to be set 00:21:47.775 Write completed with error (sct=0, sc=8) starting I/O failed: -6 [2024-11-19 13:13:50.680641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e708b0 is same with the state(6) to be set 00:21:47.775 [2024-11-19 13:13:50.680648]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e708b0 is same with the state(6) to be set 00:21:47.775 [2024-11-19 13:13:50.680655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e708b0 is same with the state(6) to be set 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 [2024-11-19 13:13:50.680663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e708b0 is same with the state(6) to be set 00:21:47.775 [2024-11-19 13:13:50.680670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e708b0 is same with the state(6) to be set 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 [2024-11-19 13:13:50.680677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e708b0 is same with the state(6) to be set 00:21:47.775 starting I/O failed: -6 [2024-11-19 13:13:50.680684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e708b0 is same with the state(6) to be set 00:21:47.775 [2024-11-19 13:13:50.680692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e708b0 is same with the state(6) to be set 00:21:47.775 [2024-11-19 13:13:50.680698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e708b0 is same with the state(6) to be set 00:21:47.775 Write completed with error (sct=0, sc=8) starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6
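The repeated nvme_qpair.c lines above mark each of cnode1's qpairs being torn down in turn; the same pattern repeats for the other subsystems (cnode3 below). Because the target was started with -e 0xFFFF, every tracepoint group was recorded, and the startup banner earlier in the log gives the capture commands verbatim; a sketch for grabbing the trace before the workspace is cleaned:

  spdk_trace -s nvmf -i 0              # snapshot, as suggested by app_setup_trace
  cp /dev/shm/nvmf_trace.0 /tmp/       # or keep the raw file for offline analysis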
00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 Write completed with error (sct=0, sc=8) 00:21:47.775 starting I/O failed: -6 00:21:47.775 [2024-11-19 13:13:50.682202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:47.775 NVMe io qpair process completion error 00:21:47.776 [2024-11-19 13:13:50.683216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e720e0 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.683236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e720e0 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.683243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e720e0 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.683250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e720e0 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.683256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e720e0 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.683639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1f03f60 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.683661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f60 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.683668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f60 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.683675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f60 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.683685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f60 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.683692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03f60 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.684367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04450 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.684389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04450 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.684396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04450 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.684403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04450 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.684410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04450 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.684416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04450 is same with the state(6) to be set 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with 
error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 [2024-11-19 13:13:50.688679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 
00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 [2024-11-19 13:13:50.689511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07320 is same with the state(6) to be set 00:21:47.776 starting I/O failed: -6 [2024-11-19 13:13:50.689534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07320 is same with the state(6) to be set 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 [2024-11-19 13:13:50.689541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07320 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.689548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07320 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.689555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07320 is same with the state(6) to be set 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 [2024-11-19 13:13:50.689561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07320 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.689567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07320 is same with the state(6) to be set 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 [2024-11-19 13:13:50.689574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07320 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.689580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07320 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.689587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07320 is same with the state(6) to be set 00:21:47.776 [2024-11-19 13:13:50.689594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8) 00:21:47.776 starting I/O failed: -6 00:21:47.776 Write completed with error (sct=0, sc=8)
00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 [2024-11-19 13:13:50.689968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f077f0 is same with the state(6) to be set 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 [2024-11-19 13:13:50.689987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f077f0 is same with the state(6) to be set 00:21:47.777 [2024-11-19 13:13:50.689994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f077f0 is same with the state(6) to be set 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 [2024-11-19 13:13:50.690001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f077f0 is same with the state(6) to be set 00:21:47.777 starting I/O failed: -6 00:21:47.777 [2024-11-19 13:13:50.690008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f077f0 is same with the state(6) to be set 00:21:47.777 [2024-11-19 13:13:50.690015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f077f0 is same with the state(6) to be set 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 [2024-11-19 13:13:50.690021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f077f0 is same with the state(6) to be set 00:21:47.777 starting I/O failed: -6 00:21:47.777 [2024-11-19 13:13:50.690027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f077f0 is same with the state(6) to be set 00:21:47.777 [2024-11-19 13:13:50.690034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f077f0 is same with the state(6) to be set 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 [2024-11-19 13:13:50.690040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f077f0 is same with the state(6) to be set 00:21:47.777 starting I/O failed: -6 00:21:47.777 [2024-11-19 13:13:50.690047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f077f0 is same with the state(6) to be set 00:21:47.777 [2024-11-19 13:13:50.690054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f077f0 is same with the state(6) to be set 00:21:47.777 [2024-11-19 13:13:50.690060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f077f0 is same with the state(6) to be set 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write 
completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 [2024-11-19 13:13:50.690336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07cc0 is same with the state(6) to be set 00:21:47.777 [2024-11-19 13:13:50.690356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07cc0 is same with the state(6) to be set 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 [2024-11-19 13:13:50.690363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07cc0 is same with the state(6) to be set 00:21:47.777 [2024-11-19 13:13:50.690369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07cc0 is same with the state(6) to be set 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 [2024-11-19 13:13:50.690376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07cc0 is same with starting I/O failed: -6 00:21:47.777 the state(6) to be set 00:21:47.777 [2024-11-19 13:13:50.690388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07cc0 is same with the state(6) to be set 00:21:47.777 [2024-11-19 13:13:50.690394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07cc0 is same with the state(6) to be set 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 [2024-11-19 13:13:50.690400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07cc0 is same with the state(6) to be set 00:21:47.777 starting I/O failed: -6 00:21:47.777 [2024-11-19 13:13:50.690407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f07cc0 is same with the state(6) to be set 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 [2024-11-19 13:13:50.690607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting I/O failed: -6 00:21:47.777 Write completed with error (sct=0, sc=8) 00:21:47.777 starting 
I/O failed: -6
00:21:47.777 Write completed with error (sct=0, sc=8)
00:21:47.777 starting I/O failed: -6
00:21:47.777 [the pair of lines above repeats for every outstanding write throughout this stretch; duplicate pairs are condensed below, and target-side *ERROR* lines that were interleaved mid-sentence with them have been untangled]
00:21:47.777 [2024-11-19 13:13:50.690891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f06e50 is same with the state(6) to be set [repeated]
00:21:47.778 [2024-11-19 13:13:50.691483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f05fe0 is same with the state(6) to be set [repeated]
00:21:47.778 [2024-11-19 13:13:50.692265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:47.778 NVMe io qpair process completion error
00:21:47.778 [2024-11-19 13:13:50.692721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f05b10 is same with the state(6) to be set [repeated]
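Decoding the two messages that dominate this stretch: "Write completed with error (sct=0, sc=8)" is an NVMe completion carrying status code type 0 (generic) and status code 0x08, i.e. the write was aborted because its submission queue was deleted on the target side, and "starting I/O failed: -6" is a new submission failing immediately with -ENXIO. A minimal sketch of where an SPDK initiator would print both; the helper names are hypothetical, only the spdk_nvme_* calls and types are real:

/* Editor's illustration, not part of the captured log. */
#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback passed to spdk_nvme_ns_cmd_write(). For a write
 * aborted by queue-pair teardown the controller reports sct=0, sc=8,
 * which is exactly what the log prints. */
static void
write_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

/* Submitting the next write against a dead qpair fails immediately with
 * -ENXIO (-6), matching the "starting I/O failed: -6" lines. */
static void
start_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	    void *buf, uint64_t lba, uint32_t num_blocks)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, num_blocks,
					write_complete, NULL, 0);
	if (rc != 0) {
		printf("starting I/O failed: %d\n", rc);
	}
}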
00:21:47.778 Write completed with error (sct=0, sc=8) [repeated; duplicates condensed]
00:21:47.778 [2024-11-19 13:13:50.693252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:47.779 [2024-11-19 13:13:50.694156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:47.779 [2024-11-19 13:13:50.695203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:47.780 [2024-11-19 13:13:50.696705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.780 NVMe io qpair process completion error
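The "CQ transport error -6 (No such device or address)" lines, like the cnode6 group above, come from the completion-polling side: spdk_nvme_qpair_process_completions() returns a negative errno once the TCP connection behind a qpair is gone, which appears to be what the log's "NVMe io qpair process completion error" marker corresponds to. A sketch of that poll path, under the same assumptions as the previous sketch:

/* Editor's illustration, not from the test sources. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	/* Returns the number of completions processed, or a negative errno.
	 * -ENXIO ("No such device or address", -6) means the transport is
	 * unusable: the driver logs "CQ transport error -6 ... on qpair id N"
	 * and every still-outstanding write comes back with sct=0, sc=8. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
	if (rc < 0) {
		printf("NVMe io qpair process completion error\n");
	}
}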
00:21:47.780 Write completed with error (sct=0, sc=8) [repeated; duplicates condensed]
00:21:47.780 [2024-11-19 13:13:50.697924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:47.780 [2024-11-19 13:13:50.698773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:47.781 [2024-11-19 13:13:50.699814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:47.781 [2024-11-19 13:13:50.702013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.781 NVMe io qpair process completion error
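For completeness, what initiator-side recovery from a dead qpair can look like. Both calls below are real public SPDK API; the wrapper is hypothetical and whether the reconnect succeeds depends on the target becoming reachable again:

/* Editor's illustration, not from the test sources. */
#include "spdk/nvme.h"

static int
recover_io_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	/* Tear down the failed connection, then try to re-establish it. */
	spdk_nvme_ctrlr_disconnect_io_qpair(qpair);
	return spdk_nvme_ctrlr_connect_io_qpair(ctrlr, qpair);
}

The sweep over qpair ids 1 through 4 on subsystem after subsystem (cnode3, cnode6, cnode9, cnode10, cnode5) is consistent with a reset/failover exercise in which every path is torn down and the initiator is expected to abort its queued writes cleanly rather than hang.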
00:21:47.781 Write completed with error (sct=0, sc=8) [repeated; duplicates condensed]
00:21:47.781 [2024-11-19 13:13:50.703085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:47.782 [2024-11-19 13:13:50.704040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:47.782 [2024-11-19 13:13:50.705053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:47.783 [2024-11-19 13:13:50.710027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.783 NVMe io qpair process completion error
00:21:47.783 [2024-11-19 13:13:50.710891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:47.783 starting I/O failed: -6
00:21:47.783 Write completed with error (sct=0, sc=8) [repeated; duplicates condensed]
00:21:47.783 [2024-11-19 13:13:50.711829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:47.784 [2024-11-19 13:13:50.712852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:47.784 Write completed with error (sct=0, sc=8) [repeated; duplicates condensed]
00:21:47.784 Write
completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 [2024-11-19 13:13:50.716913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.784 NVMe io qpair process completion error 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 starting I/O failed: -6 00:21:47.784 Write completed with error (sct=0, sc=8) 00:21:47.784 
00:21:47.785 [2024-11-19 13:13:50.717900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:47.785 [2024-11-19 13:13:50.718689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:47.785 [2024-11-19 13:13:50.719758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:47.786 [2024-11-19 13:13:50.721444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.786 NVMe io qpair process completion error
00:21:47.786 [2024-11-19 13:13:50.722485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:47.786 [2024-11-19 13:13:50.723426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.787 [2024-11-19 13:13:50.724440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:47.787 [2024-11-19 13:13:50.729153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:47.787 NVMe io qpair process completion error
00:21:47.788 [2024-11-19 13:13:50.731152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.788 [2024-11-19 13:13:50.732192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:47.789 [2024-11-19 13:13:50.737764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:47.789 NVMe io qpair process completion error
00:21:47.789 [2024-11-19 13:13:50.738703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:47.789 [2024-11-19 13:13:50.739618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:47.790 Write completed with error (sct=0, sc=8)
00:21:47.790 starting
I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 [2024-11-19 13:13:50.740843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 
00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 
00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 Write completed with error (sct=0, sc=8) 00:21:47.790 starting I/O failed: -6 00:21:47.790 [2024-11-19 13:13:50.743392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:47.790 NVMe io qpair process completion error 00:21:47.790 Initializing NVMe Controllers 00:21:47.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:21:47.790 Controller IO queue size 128, less than required. 00:21:47.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:47.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:21:47.790 Controller IO queue size 128, less than required. 00:21:47.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:47.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:21:47.790 Controller IO queue size 128, less than required. 00:21:47.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:47.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:21:47.790 Controller IO queue size 128, less than required. 00:21:47.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:47.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:21:47.790 Controller IO queue size 128, less than required. 00:21:47.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:47.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:21:47.790 Controller IO queue size 128, less than required. 00:21:47.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:47.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:47.790 Controller IO queue size 128, less than required. 00:21:47.791 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:47.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:21:47.791 Controller IO queue size 128, less than required. 00:21:47.791 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:47.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:21:47.791 Controller IO queue size 128, less than required. 00:21:47.791 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:47.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:21:47.791 Controller IO queue size 128, less than required. 00:21:47.791 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
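The repeated "Controller IO queue size 128, less than required" notices mean the workload generator asked for a deeper I/O queue than the 128 entries each fabrics controller advertises, so the surplus requests wait in the host NVMe driver's software queue rather than on the wire. As a rough illustration only (not the literal autotest invocation; the flag set assumed here is spdk_nvme_perf's common -q/-o/-w/-t/-r options), a run sized to stay within the controller's queue would look like:

# Hypothetical perf invocation: cap the per-qpair queue depth (-q) at the
# controller's advertised IO queue size (128) so requests are not queued
# at the NVMe driver. Transport ID values are taken from the log above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 128 -o 4096 -w write -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode6'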
00:21:47.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:47.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:47.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:47.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:47.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:47.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:47.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:47.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:47.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:47.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:47.791 Initialization complete. Launching workers.
00:21:47.791 ========================================================
00:21:47.791                                                                          Latency(us)
00:21:47.791 Device Information                                                      :     IOPS    MiB/s   Average       min       max
00:21:47.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  2167.17    93.12  59067.40    906.52 110926.66
00:21:47.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  2159.75    92.80  59283.23    851.78 109830.36
00:21:47.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  2189.24    94.07  58504.48    677.36 107488.47
00:21:47.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  2175.23    93.47  58933.19    711.85 105889.53
00:21:47.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  2178.20    93.59  58890.95    783.93 116197.26
00:21:47.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  2130.69    91.55  60217.83    851.10 118575.59
00:21:47.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  2102.26    90.33  60415.03   1003.68 100605.84
00:21:47.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  2112.23    90.76  60747.72    492.92  99270.66
00:21:47.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  2148.29    92.31  59791.35    691.05 104361.02
00:21:47.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  2127.29    91.41  59692.84    919.47  99446.12
00:21:47.791 ========================================================
00:21:47.791 Total                                                                   : 21490.36   923.41  59545.60    492.92 118575.59
00:21:47.791
00:21:47.791 [2024-11-19 13:13:50.747924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1171ef0 is same with the state(6) to be set
00:21:47.791 [2024-11-19 13:13:50.747974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1172a70 is same with the state(6) to be set
00:21:47.791 [2024-11-19 13:13:50.748005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1173900 is same with the state(6) to be set
00:21:47.791 [2024-11-19 13:13:50.748033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1171bc0 is same with the state(6) to be set
00:21:47.791 [2024-11-19 13:13:50.748062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1173ae0 is same with the state(6) to be set
00:21:47.791 [2024-11-19 13:13:50.748090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1171890 is same with the state(6) to be set 00:21:47.791 [2024-11-19 13:13:50.748117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1173720 is same with the state(6) to be set 00:21:47.791 [2024-11-19 13:13:50.748144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1172740 is same with the state(6) to be set 00:21:47.791 [2024-11-19 13:13:50.748172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1172410 is same with the state(6) to be set 00:21:47.791 [2024-11-19 13:13:50.748201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1171560 is same with the state(6) to be set 00:21:47.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:47.791 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:48.729 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2905166 00:21:48.729 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:48.729 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2905166 00:21:48.729 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:48.729 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.729 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:48.729 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.729 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2905166 00:21:48.729 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:48.729 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:48.729 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:48.729 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:48.729 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:48.729 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:48.729 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:48.730 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:48.730 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:48.730 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:48.730 13:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:48.730 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:48.730 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:48.730 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:48.730 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:48.730 rmmod nvme_tcp 00:21:48.730 rmmod nvme_fabrics 00:21:48.989 rmmod nvme_keyring 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2904889 ']' 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2904889 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2904889 ']' 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2904889 00:21:48.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2904889) - No such process 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2904889 is not found' 00:21:48.989 Process with pid 2904889 is not found 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.989 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.902 13:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:50.902 00:21:50.902 real 0m10.434s 00:21:50.902 user 0m27.649s 00:21:50.902 sys 0m5.197s 00:21:50.902 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.902 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:50.902 ************************************ 00:21:50.902 END TEST nvmf_shutdown_tc4 00:21:50.902 ************************************ 00:21:50.902 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:50.902 00:21:50.902 real 0m40.757s 00:21:50.902 user 1m40.030s 00:21:50.902 sys 0m13.945s 00:21:50.902 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.902 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:50.902 ************************************ 00:21:50.902 END TEST nvmf_shutdown 00:21:50.902 ************************************ 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:51.162 ************************************ 00:21:51.162 START TEST nvmf_nsid 00:21:51.162 ************************************ 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:51.162 * Looking for test storage... 
00:21:51.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:51.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.162 --rc genhtml_branch_coverage=1 00:21:51.162 --rc genhtml_function_coverage=1 00:21:51.162 --rc genhtml_legend=1 00:21:51.162 --rc geninfo_all_blocks=1 00:21:51.162 --rc geninfo_unexecuted_blocks=1 00:21:51.162 00:21:51.162 ' 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:51.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.162 --rc genhtml_branch_coverage=1 00:21:51.162 --rc genhtml_function_coverage=1 00:21:51.162 --rc genhtml_legend=1 00:21:51.162 --rc geninfo_all_blocks=1 00:21:51.162 --rc geninfo_unexecuted_blocks=1 00:21:51.162 00:21:51.162 ' 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:51.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.162 --rc genhtml_branch_coverage=1 00:21:51.162 --rc genhtml_function_coverage=1 00:21:51.162 --rc genhtml_legend=1 00:21:51.162 --rc geninfo_all_blocks=1 00:21:51.162 --rc geninfo_unexecuted_blocks=1 00:21:51.162 00:21:51.162 ' 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:51.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.162 --rc genhtml_branch_coverage=1 00:21:51.162 --rc genhtml_function_coverage=1 00:21:51.162 --rc genhtml_legend=1 00:21:51.162 --rc geninfo_all_blocks=1 00:21:51.162 --rc geninfo_unexecuted_blocks=1 00:21:51.162 00:21:51.162 ' 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.162 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=[long PATH value omitted: /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin repeated several times, followed by the standard system directories and /var/lib/snapd/snap/bin] 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=[same PATH value, re-rotated; omitted] 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=[same PATH value, re-rotated; omitted] 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo [same PATH value; omitted] 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:51.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid --
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:51.422 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:57.994 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:57.994 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
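The discovery xtrace around this point matches the two Intel E810 ports (device ID 0x159b) against the e810 array and then resolves each PCI function to its kernel net device through sysfs. A standalone sketch of that lookup (a hypothetical helper; it uses the same /sys/bus/pci/devices/<bdf>/net/* glob as the pci_net_devs assignment shown in nvmf/common.sh):

# List the kernel net devices that belong to one PCI NIC.
# lspci -d 8086:159b would list all E810-family functions first.
pci=0000:86:00.0                        # BDF reported in the log
for dev in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$dev" ] || continue           # glob may not match if no driver is bound
    echo "net device under $pci: ${dev##*/}"
done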
00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:57.994 Found net devices under 0000:86:00.0: cvl_0_0 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:57.994 Found net devices under 0000:86:00.1: cvl_0_1 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.994 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.995 13:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:57.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:21:57.995 00:21:57.995 --- 10.0.0.2 ping statistics --- 00:21:57.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.995 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:57.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:21:57.995 00:21:57.995 --- 10.0.0.1 ping statistics --- 00:21:57.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.995 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2909634 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2909634 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2909634 ']' 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.995 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:57.995 [2024-11-19 13:14:00.484617] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:21:57.995 [2024-11-19 13:14:00.484661] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.995 [2024-11-19 13:14:00.564868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.995 [2024-11-19 13:14:00.607105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.995 [2024-11-19 13:14:00.607137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.995 [2024-11-19 13:14:00.607144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.995 [2024-11-19 13:14:00.607150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.995 [2024-11-19 13:14:00.607155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:57.995 [2024-11-19 13:14:00.607704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2909878 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=d8e52114-1e95-434f-a303-40a6c70ef201 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=fcafadcc-6def-49ab-9b93-12f8d82fafae 00:21:57.995 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:58.255 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=02d117cf-d340-435b-8f59-e13aef0b3493 00:21:58.255 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:58.255 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.255 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:58.255 null0 00:21:58.255 null1 00:21:58.255 null2 00:21:58.255 [2024-11-19 13:14:01.398960] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:21:58.255 [2024-11-19 13:14:01.399007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2909878 ] 00:21:58.255 [2024-11-19 13:14:01.401439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.255 [2024-11-19 13:14:01.425606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.255 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.255 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2909878 /var/tmp/tgt2.sock 00:21:58.255 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2909878 ']' 00:21:58.255 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:58.255 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.255 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:58.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
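The nsid test at this point is driving two SPDK targets at once: the first nvmf_tgt (pid 2909634) runs inside the cvl_0_0_ns_spdk namespace and listens on 10.0.0.2:4420, while the second spdk_tgt (RPC socket /var/tmp/tgt2.sock, core mask 2) stays in the default namespace and, as the trace shows shortly, listens on 10.0.0.1:4421. Condensed as a sketch, the connect step the test then performs against that second target is the following (address, subsystem NQN, hostnqn and hostid all verbatim from this run):

    nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562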
00:21:58.255 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.255 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:58.255 [2024-11-19 13:14:01.475593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.255 [2024-11-19 13:14:01.516801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.514 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.514 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:58.514 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:58.774 [2024-11-19 13:14:02.038676] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.774 [2024-11-19 13:14:02.054785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:58.774 nvme0n1 nvme0n2 00:21:58.774 nvme1n1 00:21:58.774 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:58.774 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:58.774 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:00.149 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:00.149 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:00.149 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:22:00.149 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:00.149 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:00.149 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:00.149 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:00.149 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:00.149 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:00.149 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:00.149 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:00.149 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:00.149 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:01.086 13:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid d8e52114-1e95-434f-a303-40a6c70ef201 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d8e521141e95434fa30340a6c70ef201 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D8E521141E95434FA30340A6C70EF201 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ D8E521141E95434FA30340A6C70EF201 == \D\8\E\5\2\1\1\4\1\E\9\5\4\3\4\F\A\3\0\3\4\0\A\6\C\7\0\E\F\2\0\1 ]] 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid fcafadcc-6def-49ab-9b93-12f8d82fafae 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fcafadcc6def49ab9b9312f8d82fafae 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FCAFADCC6DEF49AB9B9312F8D82FAFAE 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ FCAFADCC6DEF49AB9B9312F8D82FAFAE == \F\C\A\F\A\D\C\C\6\D\E\F\4\9\A\B\9\B\9\3\1\2\F\8\D\8\2\F\A\F\A\E ]] 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:01.086 13:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 02d117cf-d340-435b-8f59-e13aef0b3493 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=02d117cfd340435b8f59e13aef0b3493 00:22:01.086 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 02D117CFD340435B8F59E13AEF0B3493 00:22:01.087 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 02D117CFD340435B8F59E13AEF0B3493 == \0\2\D\1\1\7\C\F\D\3\4\0\4\3\5\B\8\F\5\9\E\1\3\A\E\F\0\B\3\4\9\3 ]] 00:22:01.087 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:01.346 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:01.346 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:01.346 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2909878 00:22:01.346 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2909878 ']' 00:22:01.346 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2909878 00:22:01.346 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:01.346 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.346 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2909878 00:22:01.346 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:01.346 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:01.346 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2909878' 00:22:01.346 killing process with pid 2909878 00:22:01.346 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2909878 00:22:01.346 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2909878 00:22:01.606 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:01.606 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:01.606 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:01.606 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:01.606 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:22:01.606 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:01.606 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:01.606 rmmod nvme_tcp 00:22:01.606 rmmod nvme_fabrics 00:22:01.606 rmmod nvme_keyring 00:22:01.865 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:01.865 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:01.865 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:01.865 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2909634 ']' 00:22:01.865 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2909634 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2909634 ']' 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2909634 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2909634 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2909634' 00:22:01.865 killing process with pid 2909634 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2909634 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2909634 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.865 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.403 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:04.403 00:22:04.403 real 0m12.947s 00:22:04.403 user 
0m10.486s 00:22:04.403 sys 0m5.430s 00:22:04.403 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.403 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:04.403 ************************************ 00:22:04.403 END TEST nvmf_nsid 00:22:04.403 ************************************ 00:22:04.403 13:14:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:04.403 00:22:04.403 real 12m1.175s 00:22:04.403 user 25m45.038s 00:22:04.403 sys 3m43.542s 00:22:04.403 13:14:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.403 13:14:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:04.403 ************************************ 00:22:04.403 END TEST nvmf_target_extra 00:22:04.403 ************************************ 00:22:04.403 13:14:07 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:04.403 13:14:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:04.403 13:14:07 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.403 13:14:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:04.403 ************************************ 00:22:04.403 START TEST nvmf_host 00:22:04.403 ************************************ 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:04.403 * Looking for test storage... 00:22:04.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:04.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.403 --rc genhtml_branch_coverage=1 00:22:04.403 --rc genhtml_function_coverage=1 00:22:04.403 --rc genhtml_legend=1 00:22:04.403 --rc geninfo_all_blocks=1 00:22:04.403 --rc geninfo_unexecuted_blocks=1 00:22:04.403 00:22:04.403 ' 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:04.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.403 --rc genhtml_branch_coverage=1 00:22:04.403 --rc genhtml_function_coverage=1 00:22:04.403 --rc genhtml_legend=1 00:22:04.403 --rc geninfo_all_blocks=1 00:22:04.403 --rc geninfo_unexecuted_blocks=1 00:22:04.403 00:22:04.403 ' 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:04.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.403 --rc genhtml_branch_coverage=1 00:22:04.403 --rc genhtml_function_coverage=1 00:22:04.403 --rc genhtml_legend=1 00:22:04.403 --rc geninfo_all_blocks=1 00:22:04.403 --rc geninfo_unexecuted_blocks=1 00:22:04.403 00:22:04.403 ' 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:04.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.403 --rc genhtml_branch_coverage=1 00:22:04.403 --rc genhtml_function_coverage=1 00:22:04.403 --rc genhtml_legend=1 00:22:04.403 --rc geninfo_all_blocks=1 00:22:04.403 --rc geninfo_unexecuted_blocks=1 00:22:04.403 00:22:04.403 ' 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
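For reference, the nguid assertions that closed out the nsid test above (target/nsid.sh steps 96-100 in the trace) boil down to a dash-strip-and-uppercase comparison between the UUID handed to the target and the NGUID the controller reports back. A standalone sketch of one such check, with the device node and UUID taken from this run:

    uuid=d8e52114-1e95-434f-a303-40a6c70ef201
    want=$(tr -d '-' <<<"$uuid" | tr 'a-f' 'A-F')                  # uuid2nguid: strip dashes, uppercase
    got=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr 'a-f' 'A-F')
    [[ $got == "$want" ]] && echo "nsid 1 nguid OK: $got"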
00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.403 13:14:07 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:04.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.404 ************************************ 00:22:04.404 START TEST nvmf_multicontroller 00:22:04.404 ************************************ 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:04.404 * Looking for test storage... 
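One genuine wart the trace keeps surfacing is the "line 33: [: : integer expression expected" complaint from nvmf/common.sh: build_nvmf_app_args runs '[' '' -eq 1 ']', and test(1)'s -eq demands integer operands, so an empty string is an error (exit status 2) rather than a clean false; the script only survives because the failing test is in a condition context and execution drops to the next branch. A minimal reproduction and the usual guards, as a sketch:

    VAR=
    [ "$VAR" -eq 1 ]            # bash: [: : integer expression expected (status 2)
    [ "${VAR:-0}" -eq 1 ]       # guard 1: default the operand to a number
    [[ $VAR == 1 ]]             # guard 2: string comparison never type-errors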
00:22:04.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:22:04.404 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:04.664 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:04.664 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:04.664 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:04.664 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:04.664 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:04.664 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:04.664 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:04.664 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:04.664 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:04.664 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:04.664 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:04.664 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:04.664 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:04.664 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:04.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.665 --rc genhtml_branch_coverage=1 00:22:04.665 --rc genhtml_function_coverage=1 00:22:04.665 --rc genhtml_legend=1 00:22:04.665 --rc geninfo_all_blocks=1 00:22:04.665 --rc geninfo_unexecuted_blocks=1 00:22:04.665 00:22:04.665 ' 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:04.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.665 --rc genhtml_branch_coverage=1 00:22:04.665 --rc genhtml_function_coverage=1 00:22:04.665 --rc genhtml_legend=1 00:22:04.665 --rc geninfo_all_blocks=1 00:22:04.665 --rc geninfo_unexecuted_blocks=1 00:22:04.665 00:22:04.665 ' 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:04.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.665 --rc genhtml_branch_coverage=1 00:22:04.665 --rc genhtml_function_coverage=1 00:22:04.665 --rc genhtml_legend=1 00:22:04.665 --rc geninfo_all_blocks=1 00:22:04.665 --rc geninfo_unexecuted_blocks=1 00:22:04.665 00:22:04.665 ' 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:04.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.665 --rc genhtml_branch_coverage=1 00:22:04.665 --rc genhtml_function_coverage=1 00:22:04.665 --rc genhtml_legend=1 00:22:04.665 --rc geninfo_all_blocks=1 00:22:04.665 --rc geninfo_unexecuted_blocks=1 00:22:04.665 00:22:04.665 ' 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:04.665 13:14:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:04.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:04.665 13:14:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:04.665 13:14:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:11.247 
13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:11.247 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:11.247 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.247 13:14:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.247 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:11.248 Found net devices under 0000:86:00.0: cvl_0_0 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:11.248 Found net devices under 0000:86:00.1: cvl_0_1 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
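nvmf_tcp_init, which the trace enters next, is what turns the two cvl interfaces found above into the point-to-point test topology: target side isolated in a network namespace, initiator side left in the default namespace, one firewall rule opened for the NVMe/TCP port, then a ping in each direction. Condensed from the commands that follow (interface names and addresses verbatim from this run):

    ip netns add cvl_0_0_ns_spdk                        # target-side netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator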
00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:11.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:22:11.248 00:22:11.248 --- 10.0.0.2 ping statistics --- 00:22:11.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.248 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:11.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:22:11.248 00:22:11.248 --- 10.0.0.1 ping statistics --- 00:22:11.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.248 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2914103 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2914103 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2914103 ']' 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.248 13:14:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.248 [2024-11-19 13:14:13.835334] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
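Worth noting on the core masks: the nsid target earlier ran with -m 1 (core 0 only, one reactor), while this multicontroller run uses nvmfappstart -m 0xE, i.e. binary 1110 = cores 1-3, which is why the banner just below reports 'Total cores available: 3' and reactors start on cores 1, 2 and 3. A throwaway sketch of the mask arithmetic:

    mask=0xE
    for c in {0..3}; do (( (mask >> c) & 1 )) && echo "reactor on core $c"; done
    # -> cores 1, 2, 3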
00:22:11.248 [2024-11-19 13:14:13.835384] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.248 [2024-11-19 13:14:13.915413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:11.248 [2024-11-19 13:14:13.957059] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.248 [2024-11-19 13:14:13.957096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.248 [2024-11-19 13:14:13.957102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.248 [2024-11-19 13:14:13.957109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.248 [2024-11-19 13:14:13.957114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.248 [2024-11-19 13:14:13.958570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.248 [2024-11-19 13:14:13.958679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.248 [2024-11-19 13:14:13.958680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.248 [2024-11-19 13:14:14.102565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.248 Malloc0 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:11.248 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.249 [2024-11-19 13:14:14.170897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.249 [2024-11-19 13:14:14.178810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.249 Malloc1 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2914203 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2914203 /var/tmp/bdevperf.sock 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2914203 ']' 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:11.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
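The rpc_cmd calls above assemble the configuration this test exercises: a TCP transport, plus two subsystems (cnode1 and cnode2), each backed by a 64 MiB malloc bdev with 512-byte blocks and listening on ports 4420 and 4421 of 10.0.0.2. A sketch of the same sequence against SPDK's rpc.py helper, with the script paths assumed relative to an SPDK checkout (rpc_cmd in the harness wraps this):

    RPC=./scripts/rpc.py        # path is an assumption
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 with Malloc1 repeats the same four steps; bdevperf then starts
    # with its own RPC socket so the initiator side can be driven separately:
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

The -z flag holds bdevperf idle until a perform_tests request arrives on /var/tmp/bdevperf.sock.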
00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.249 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.509 NVMe0n1 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.509 1 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.509 request: 00:22:11.509 { 00:22:11.509 "name": "NVMe0", 00:22:11.509 "trtype": "tcp", 00:22:11.509 "traddr": "10.0.0.2", 00:22:11.509 "adrfam": "ipv4", 00:22:11.509 "trsvcid": "4420", 00:22:11.509 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:11.509 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:11.509 "hostaddr": "10.0.0.1", 00:22:11.509 "prchk_reftag": false, 00:22:11.509 "prchk_guard": false, 00:22:11.509 "hdgst": false, 00:22:11.509 "ddgst": false, 00:22:11.509 "allow_unrecognized_csi": false, 00:22:11.509 "method": "bdev_nvme_attach_controller", 00:22:11.509 "req_id": 1 00:22:11.509 } 00:22:11.509 Got JSON-RPC error response 00:22:11.509 response: 00:22:11.509 { 00:22:11.509 "code": -114, 00:22:11.509 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:11.509 } 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.509 request: 00:22:11.509 { 00:22:11.509 "name": "NVMe0", 00:22:11.509 "trtype": "tcp", 00:22:11.509 "traddr": "10.0.0.2", 00:22:11.509 "adrfam": "ipv4", 00:22:11.509 "trsvcid": "4420", 00:22:11.509 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:11.509 "hostaddr": "10.0.0.1", 00:22:11.509 "prchk_reftag": false, 00:22:11.509 "prchk_guard": false, 00:22:11.509 "hdgst": false, 00:22:11.509 "ddgst": false, 00:22:11.509 "allow_unrecognized_csi": false, 00:22:11.509 "method": "bdev_nvme_attach_controller", 00:22:11.509 "req_id": 1 00:22:11.509 } 00:22:11.509 Got JSON-RPC error response 00:22:11.509 response: 00:22:11.509 { 00:22:11.509 "code": -114, 00:22:11.509 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:11.509 } 00:22:11.509 13:14:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.509 request: 00:22:11.509 { 00:22:11.509 "name": "NVMe0", 00:22:11.509 "trtype": "tcp", 00:22:11.509 "traddr": "10.0.0.2", 00:22:11.509 "adrfam": "ipv4", 00:22:11.509 "trsvcid": "4420", 00:22:11.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.509 "hostaddr": "10.0.0.1", 00:22:11.509 "prchk_reftag": false, 00:22:11.509 "prchk_guard": false, 00:22:11.509 "hdgst": false, 00:22:11.509 "ddgst": false, 00:22:11.509 "multipath": "disable", 00:22:11.509 "allow_unrecognized_csi": false, 00:22:11.509 "method": "bdev_nvme_attach_controller", 00:22:11.509 "req_id": 1 00:22:11.509 } 00:22:11.509 Got JSON-RPC error response 00:22:11.509 response: 00:22:11.509 { 00:22:11.509 "code": -114, 00:22:11.509 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:11.509 } 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:11.509 13:14:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:11.509 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:11.510 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:11.510 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:11.510 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.510 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:11.510 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.510 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:11.510 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.510 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.510 request: 00:22:11.510 { 00:22:11.510 "name": "NVMe0", 00:22:11.510 "trtype": "tcp", 00:22:11.510 "traddr": "10.0.0.2", 00:22:11.510 "adrfam": "ipv4", 00:22:11.510 "trsvcid": "4420", 00:22:11.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.510 "hostaddr": "10.0.0.1", 00:22:11.510 "prchk_reftag": false, 00:22:11.510 "prchk_guard": false, 00:22:11.510 "hdgst": false, 00:22:11.510 "ddgst": false, 00:22:11.510 "multipath": "failover", 00:22:11.510 "allow_unrecognized_csi": false, 00:22:11.510 "method": "bdev_nvme_attach_controller", 00:22:11.510 "req_id": 1 00:22:11.510 } 00:22:11.510 Got JSON-RPC error response 00:22:11.510 response: 00:22:11.510 { 00:22:11.510 "code": -114, 00:22:11.510 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:11.510 } 00:22:11.510 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:11.510 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:11.510 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:11.510 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:11.510 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:11.510 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:11.510 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.510 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.769 NVMe0n1 00:22:11.769 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
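Each NOT-wrapped attach above is expected to fail with JSON-RPC error -114: the controller name NVMe0 is already bound to that network path, so re-attaching on the same path with a different hostnqn, a different subsystem NQN, multipath disabled, or multipath failover is rejected, while attaching the same subsystem through the new port 4421 succeeds and yields NVMe0n1. A condensed standalone version of that check, assuming the bdevperf RPC socket launched earlier and the rpc.py helper path:

    RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # the first attach creates bdev NVMe0n1
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # reusing the name against the same path with a conflicting identity
    # must fail with -114, so success here means the test has failed
    if $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1; then
        echo "duplicate attach unexpectedly succeeded" >&2; exit 1
    fi
    # a genuinely new path (port 4421) under the same name is accepted
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1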
00:22:11.769 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:11.769 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.769 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.769 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.769 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:11.769 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.769 13:14:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.769 00:22:11.769 13:14:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.769 13:14:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:11.769 13:14:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:11.769 13:14:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.769 13:14:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:11.769 13:14:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.769 13:14:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:11.769 13:14:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:13.148 { 00:22:13.148 "results": [ 00:22:13.148 { 00:22:13.148 "job": "NVMe0n1", 00:22:13.148 "core_mask": "0x1", 00:22:13.148 "workload": "write", 00:22:13.148 "status": "finished", 00:22:13.148 "queue_depth": 128, 00:22:13.148 "io_size": 4096, 00:22:13.148 "runtime": 1.008008, 00:22:13.148 "iops": 24457.147165498685, 00:22:13.148 "mibps": 95.53573111522924, 00:22:13.148 "io_failed": 0, 00:22:13.148 "io_timeout": 0, 00:22:13.148 "avg_latency_us": 5227.291738724804, 00:22:13.148 "min_latency_us": 3134.330434782609, 00:22:13.148 "max_latency_us": 11967.44347826087 00:22:13.148 } 00:22:13.148 ], 00:22:13.148 "core_count": 1 00:22:13.148 } 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2914203 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 2914203 ']' 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2914203 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2914203 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2914203' 00:22:13.148 killing process with pid 2914203 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2914203 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2914203 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:13.148 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:13.148 [2024-11-19 13:14:14.281208] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:22:13.148 [2024-11-19 13:14:14.281260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914203 ] 00:22:13.148 [2024-11-19 13:14:14.357643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.148 [2024-11-19 13:14:14.401597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.148 [2024-11-19 13:14:15.064420] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 5d4838ba-d8f1-4915-bbd1-a17d11002daa already exists 00:22:13.148 [2024-11-19 13:14:15.064450] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:5d4838ba-d8f1-4915-bbd1-a17d11002daa alias for bdev NVMe1n1 00:22:13.148 [2024-11-19 13:14:15.064458] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:13.148 Running I/O for 1 seconds... 00:22:13.148 24398.00 IOPS, 95.30 MiB/s 00:22:13.148 Latency(us) 00:22:13.148 [2024-11-19T12:14:16.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.148 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:13.148 NVMe0n1 : 1.01 24457.15 95.54 0.00 0.00 5227.29 3134.33 11967.44 00:22:13.148 [2024-11-19T12:14:16.525Z] =================================================================================================================== 00:22:13.148 [2024-11-19T12:14:16.525Z] Total : 24457.15 95.54 0.00 0.00 5227.29 3134.33 11967.44 00:22:13.148 Received shutdown signal, test time was about 1.000000 seconds 00:22:13.148 00:22:13.148 Latency(us) 00:22:13.148 [2024-11-19T12:14:16.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.148 [2024-11-19T12:14:16.525Z] =================================================================================================================== 00:22:13.148 [2024-11-19T12:14:16.525Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:13.148 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:13.148 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:13.149 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:13.149 rmmod nvme_tcp 00:22:13.149 rmmod nvme_fabrics 00:22:13.149 rmmod nvme_keyring 00:22:13.408 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:13.408 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:13.408 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:13.408 
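The raw JSON printed by perform_tests earlier and the Latency table replayed from try.txt above describe the same result object, so the measurement step can be scripted end to end. A sketch, assuming (as in this run) the JSON object is the only stdout from the helper; jq here is illustrative, any JSON parser works on the "results" array:

    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests \
        | tee results.json
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' \
        results.json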
13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2914103 ']' 00:22:13.408 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2914103 00:22:13.408 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2914103 ']' 00:22:13.408 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2914103 00:22:13.408 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:13.408 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.408 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2914103 00:22:13.408 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:13.408 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:13.408 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2914103' 00:22:13.408 killing process with pid 2914103 00:22:13.408 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2914103 00:22:13.408 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2914103 00:22:13.668 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:13.668 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:13.668 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:13.668 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:13.668 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:13.668 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:13.668 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:13.668 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:13.668 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:13.668 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.668 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.668 13:14:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.574 13:14:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:15.574 00:22:15.574 real 0m11.239s 00:22:15.574 user 0m12.579s 00:22:15.574 sys 0m5.170s 00:22:15.574 13:14:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.574 13:14:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:15.574 ************************************ 00:22:15.574 END TEST nvmf_multicontroller 00:22:15.574 ************************************ 00:22:15.574 13:14:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:22:15.574 13:14:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:15.574 13:14:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.574 13:14:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.834 ************************************ 00:22:15.834 START TEST nvmf_aer 00:22:15.834 ************************************ 00:22:15.834 13:14:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:15.834 * Looking for test storage... 00:22:15.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:15.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.835 --rc genhtml_branch_coverage=1 00:22:15.835 --rc genhtml_function_coverage=1 00:22:15.835 --rc genhtml_legend=1 00:22:15.835 --rc geninfo_all_blocks=1 00:22:15.835 --rc geninfo_unexecuted_blocks=1 00:22:15.835 00:22:15.835 ' 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:15.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.835 --rc genhtml_branch_coverage=1 00:22:15.835 --rc genhtml_function_coverage=1 00:22:15.835 --rc genhtml_legend=1 00:22:15.835 --rc geninfo_all_blocks=1 00:22:15.835 --rc geninfo_unexecuted_blocks=1 00:22:15.835 00:22:15.835 ' 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:15.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.835 --rc genhtml_branch_coverage=1 00:22:15.835 --rc genhtml_function_coverage=1 00:22:15.835 --rc genhtml_legend=1 00:22:15.835 --rc geninfo_all_blocks=1 00:22:15.835 --rc geninfo_unexecuted_blocks=1 00:22:15.835 00:22:15.835 ' 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:15.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.835 --rc genhtml_branch_coverage=1 00:22:15.835 --rc genhtml_function_coverage=1 00:22:15.835 --rc genhtml_legend=1 00:22:15.835 --rc geninfo_all_blocks=1 00:22:15.835 --rc geninfo_unexecuted_blocks=1 00:22:15.835 00:22:15.835 ' 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:15.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.835 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:15.836 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:15.836 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:15.836 13:14:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:22.405 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:22.405 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:22.405 Found net devices under 0000:86:00.0: cvl_0_0 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.405 13:14:24 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:22.405 Found net devices under 0000:86:00.1: cvl_0_1 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:22.405 13:14:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.405 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.405 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.405 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:22.405 
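The block above builds the single-host NVMe/TCP topology: the target port cvl_0_0 is moved into the private network namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2, while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, and an iptables rule opens the NVMe/TCP port toward the initiator. A minimal standalone sketch of the same setup, reconstructed from the trace (interface, namespace, and address values exactly as logged):

    # Target side lives in its own namespace; initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends of the back-to-back link.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring the links up, including loopback inside the namespace.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow inbound NVMe/TCP (port 4420) on the initiator-facing port.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings that follow verify each direction of the path before any NVMe traffic is attempted.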
13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:22.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:22:22.406 00:22:22.406 --- 10.0.0.2 ping statistics --- 00:22:22.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.406 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:22.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:22:22.406 00:22:22.406 --- 10.0.0.1 ping statistics --- 00:22:22.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.406 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2918030 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2918030 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2918030 ']' 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:22.406 [2024-11-19 13:14:25.165324] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
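nvmfappstart then launches the target itself under ip netns exec, so every listener it opens binds on the namespaced side of the link; the -m 0xF mask hands it four reactor cores, which is why four "Reactor started" notices appear below. A sketch of the equivalent launch (the backgrounding and pid capture are an assumption about what the harness does internally; the command and flags are as logged):

    # Run nvmf_tgt inside the target namespace: shm id 0, full trace mask,
    # four reactor cores. The harness keeps the pid for later cleanup.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!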
00:22:22.406 [2024-11-19 13:14:25.165374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.406 [2024-11-19 13:14:25.247334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:22.406 [2024-11-19 13:14:25.290511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.406 [2024-11-19 13:14:25.290548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.406 [2024-11-19 13:14:25.290556] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.406 [2024-11-19 13:14:25.290562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.406 [2024-11-19 13:14:25.290567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.406 [2024-11-19 13:14:25.292026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.406 [2024-11-19 13:14:25.292129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:22.406 [2024-11-19 13:14:25.292130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.406 [2024-11-19 13:14:25.292046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.665 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.665 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:22.665 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:22.665 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:22.665 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:22.924 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.924 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:22.925 [2024-11-19 13:14:26.072045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:22.925 Malloc0 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:22.925 [2024-11-19 13:14:26.130873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:22.925 [ 00:22:22.925 { 00:22:22.925 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:22.925 "subtype": "Discovery", 00:22:22.925 "listen_addresses": [], 00:22:22.925 "allow_any_host": true, 00:22:22.925 "hosts": [] 00:22:22.925 }, 00:22:22.925 { 00:22:22.925 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.925 "subtype": "NVMe", 00:22:22.925 "listen_addresses": [ 00:22:22.925 { 00:22:22.925 "trtype": "TCP", 00:22:22.925 "adrfam": "IPv4", 00:22:22.925 "traddr": "10.0.0.2", 00:22:22.925 "trsvcid": "4420" 00:22:22.925 } 00:22:22.925 ], 00:22:22.925 "allow_any_host": true, 00:22:22.925 "hosts": [], 00:22:22.925 "serial_number": "SPDK00000000000001", 00:22:22.925 "model_number": "SPDK bdev Controller", 00:22:22.925 "max_namespaces": 2, 00:22:22.925 "min_cntlid": 1, 00:22:22.925 "max_cntlid": 65519, 00:22:22.925 "namespaces": [ 00:22:22.925 { 00:22:22.925 "nsid": 1, 00:22:22.925 "bdev_name": "Malloc0", 00:22:22.925 "name": "Malloc0", 00:22:22.925 "nguid": "55C1084ECA03461FA7C4AC63BF65D4D7", 00:22:22.925 "uuid": "55c1084e-ca03-461f-a7c4-ac63bf65d4d7" 00:22:22.925 } 00:22:22.925 ] 00:22:22.925 } 00:22:22.925 ] 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2918233 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:22.925 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:23.204 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.205 Malloc1 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.205 Asynchronous Event Request test 00:22:23.205 Attaching to 10.0.0.2 00:22:23.205 Attached to 10.0.0.2 00:22:23.205 Registering asynchronous event callbacks... 00:22:23.205 Starting namespace attribute notice tests for all controllers... 00:22:23.205 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:23.205 aer_cb - Changed Namespace 00:22:23.205 Cleaning up... 
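The sequence above is the heart of the AER test: the aer tool connects to the subsystem, arms its Asynchronous Event Request callbacks, and touches /tmp/aer_touch_file; the harness polls for that file (the i=0,1,2 loop above), then hot-adds Malloc1 as namespace 2, and the target's Namespace Attribute Changed notice (log page 4) arrives as "aer_cb - Changed Namespace". Roughly the same flow driven by hand, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper (that substitution is an assumption):

    # Start the in-tree AER listener against the target subsystem.
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 \
        subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &

    # Wait until the listener signals that its callbacks are registered.
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

    # Hot-add a second namespace; the target emits a Namespace Attribute
    # Changed AEN, which the listener reports and then cleans up after.
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2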
00:22:23.205 [ 00:22:23.205 { 00:22:23.205 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:23.205 "subtype": "Discovery", 00:22:23.205 "listen_addresses": [], 00:22:23.205 "allow_any_host": true, 00:22:23.205 "hosts": [] 00:22:23.205 }, 00:22:23.205 { 00:22:23.205 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.205 "subtype": "NVMe", 00:22:23.205 "listen_addresses": [ 00:22:23.205 { 00:22:23.205 "trtype": "TCP", 00:22:23.205 "adrfam": "IPv4", 00:22:23.205 "traddr": "10.0.0.2", 00:22:23.205 "trsvcid": "4420" 00:22:23.205 } 00:22:23.205 ], 00:22:23.205 "allow_any_host": true, 00:22:23.205 "hosts": [], 00:22:23.205 "serial_number": "SPDK00000000000001", 00:22:23.205 "model_number": "SPDK bdev Controller", 00:22:23.205 "max_namespaces": 2, 00:22:23.205 "min_cntlid": 1, 00:22:23.205 "max_cntlid": 65519, 00:22:23.205 "namespaces": [ 00:22:23.205 { 00:22:23.205 "nsid": 1, 00:22:23.205 "bdev_name": "Malloc0", 00:22:23.205 "name": "Malloc0", 00:22:23.205 "nguid": "55C1084ECA03461FA7C4AC63BF65D4D7", 00:22:23.205 "uuid": "55c1084e-ca03-461f-a7c4-ac63bf65d4d7" 00:22:23.205 }, 00:22:23.205 { 00:22:23.205 "nsid": 2, 00:22:23.205 "bdev_name": "Malloc1", 00:22:23.205 "name": "Malloc1", 00:22:23.205 "nguid": "D725756BB7564FEEBDDFD9D93D3B2E1E", 00:22:23.205 "uuid": "d725756b-b756-4fee-bddf-d9d93d3b2e1e" 00:22:23.205 } 00:22:23.205 ] 00:22:23.205 } 00:22:23.205 ] 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2918233 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:23.205 rmmod 
nvme_tcp 00:22:23.205 rmmod nvme_fabrics 00:22:23.205 rmmod nvme_keyring 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2918030 ']' 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2918030 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2918030 ']' 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2918030 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.205 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2918030 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2918030' 00:22:23.465 killing process with pid 2918030 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2918030 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2918030 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.465 13:14:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.109 13:14:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:26.109 00:22:26.109 real 0m9.892s 00:22:26.109 user 0m7.823s 00:22:26.109 sys 0m4.932s 00:22:26.109 13:14:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.109 13:14:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:26.109 ************************************ 00:22:26.109 END TEST nvmf_aer 00:22:26.109 ************************************ 00:22:26.109 13:14:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:26.109 13:14:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:26.109 13:14:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.109 13:14:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.109 ************************************ 00:22:26.109 START TEST nvmf_async_init 00:22:26.109 ************************************ 00:22:26.109 13:14:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:26.109 * Looking for test storage... 00:22:26.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:26.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.109 --rc genhtml_branch_coverage=1 00:22:26.109 --rc genhtml_function_coverage=1 00:22:26.109 --rc genhtml_legend=1 00:22:26.109 --rc geninfo_all_blocks=1 00:22:26.109 --rc geninfo_unexecuted_blocks=1 00:22:26.109 00:22:26.109 ' 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:26.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.109 --rc genhtml_branch_coverage=1 00:22:26.109 --rc genhtml_function_coverage=1 00:22:26.109 --rc genhtml_legend=1 00:22:26.109 --rc geninfo_all_blocks=1 00:22:26.109 --rc geninfo_unexecuted_blocks=1 00:22:26.109 00:22:26.109 ' 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:26.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.109 --rc genhtml_branch_coverage=1 00:22:26.109 --rc genhtml_function_coverage=1 00:22:26.109 --rc genhtml_legend=1 00:22:26.109 --rc geninfo_all_blocks=1 00:22:26.109 --rc geninfo_unexecuted_blocks=1 00:22:26.109 00:22:26.109 ' 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:26.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.109 --rc genhtml_branch_coverage=1 00:22:26.109 --rc genhtml_function_coverage=1 00:22:26.109 --rc genhtml_legend=1 00:22:26.109 --rc geninfo_all_blocks=1 00:22:26.109 --rc geninfo_unexecuted_blocks=1 00:22:26.109 00:22:26.109 ' 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.109 13:14:29 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.109 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:26.110 13:14:29 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=15e10a557909408988bf9038539a2100 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:26.110 13:14:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.385 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.385 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:31.645 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:31.645 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:31.645 Found net devices under 0000:86:00.0: cvl_0_0 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:31.645 Found net devices under 0000:86:00.1: cvl_0_1 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:31.645 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.646 13:14:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:31.646 13:14:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:31.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:22:31.646 00:22:31.646 --- 10.0.0.2 ping statistics --- 00:22:31.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.646 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:22:31.646 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:31.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:22:31.646 00:22:31.646 --- 10.0.0.1 ping statistics --- 00:22:31.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.646 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:22:31.646 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.646 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:31.646 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:31.646 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.646 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:31.646 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:31.646 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.646 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:31.646 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:31.906 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:31.906 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:31.906 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.906 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.906 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2921763 00:22:31.906 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:31.906 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2921763 00:22:31.906 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2921763 ']' 00:22:31.906 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.906 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.906 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.906 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.906 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:31.906 [2024-11-19 13:14:35.113791] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
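For async_init the target gets a single core (-m 0x1), so only one reactor notice appears below; waitforlisten then blocks until the pid is alive and the app answers RPCs on /var/tmp/spdk.sock before any rpc_cmd runs. A minimal sketch of that readiness gate (the retry count and interval are illustrative; the real helper also checks that the pid is still running):

    # Poll the RPC socket until the target is ready, up to ~10 s.
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done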
00:22:31.906 [2024-11-19 13:14:35.113844] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.906 [2024-11-19 13:14:35.195760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.906 [2024-11-19 13:14:35.236134] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.906 [2024-11-19 13:14:35.236172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.906 [2024-11-19 13:14:35.236180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.906 [2024-11-19 13:14:35.236187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.906 [2024-11-19 13:14:35.236193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:31.906 [2024-11-19 13:14:35.236762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.165 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.166 [2024-11-19 13:14:35.380840] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.166 null0 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 15e10a557909408988bf9038539a2100 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.166 [2024-11-19 13:14:35.429112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.166 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.425 nvme0n1 00:22:32.425 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.425 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:32.425 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.425 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.425 [ 00:22:32.425 { 00:22:32.425 "name": "nvme0n1", 00:22:32.425 "aliases": [ 00:22:32.425 "15e10a55-7909-4089-88bf-9038539a2100" 00:22:32.425 ], 00:22:32.425 "product_name": "NVMe disk", 00:22:32.425 "block_size": 512, 00:22:32.425 "num_blocks": 2097152, 00:22:32.425 "uuid": "15e10a55-7909-4089-88bf-9038539a2100", 00:22:32.425 "numa_id": 1, 00:22:32.425 "assigned_rate_limits": { 00:22:32.425 "rw_ios_per_sec": 0, 00:22:32.425 "rw_mbytes_per_sec": 0, 00:22:32.425 "r_mbytes_per_sec": 0, 00:22:32.425 "w_mbytes_per_sec": 0 00:22:32.425 }, 00:22:32.425 "claimed": false, 00:22:32.425 "zoned": false, 00:22:32.425 "supported_io_types": { 00:22:32.425 "read": true, 00:22:32.425 "write": true, 00:22:32.425 "unmap": false, 00:22:32.425 "flush": true, 00:22:32.425 "reset": true, 00:22:32.425 "nvme_admin": true, 00:22:32.425 "nvme_io": true, 00:22:32.425 "nvme_io_md": false, 00:22:32.425 "write_zeroes": true, 00:22:32.425 "zcopy": false, 00:22:32.425 "get_zone_info": false, 00:22:32.425 "zone_management": false, 00:22:32.425 "zone_append": false, 00:22:32.425 "compare": true, 00:22:32.425 "compare_and_write": true, 00:22:32.425 "abort": true, 00:22:32.425 "seek_hole": false, 00:22:32.425 "seek_data": false, 00:22:32.425 "copy": true, 00:22:32.425 "nvme_iov_md": false 00:22:32.425 }, 00:22:32.425 
"memory_domains": [ 00:22:32.425 { 00:22:32.425 "dma_device_id": "system", 00:22:32.425 "dma_device_type": 1 00:22:32.425 } 00:22:32.425 ], 00:22:32.426 "driver_specific": { 00:22:32.426 "nvme": [ 00:22:32.426 { 00:22:32.426 "trid": { 00:22:32.426 "trtype": "TCP", 00:22:32.426 "adrfam": "IPv4", 00:22:32.426 "traddr": "10.0.0.2", 00:22:32.426 "trsvcid": "4420", 00:22:32.426 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:32.426 }, 00:22:32.426 "ctrlr_data": { 00:22:32.426 "cntlid": 1, 00:22:32.426 "vendor_id": "0x8086", 00:22:32.426 "model_number": "SPDK bdev Controller", 00:22:32.426 "serial_number": "00000000000000000000", 00:22:32.426 "firmware_revision": "25.01", 00:22:32.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:32.426 "oacs": { 00:22:32.426 "security": 0, 00:22:32.426 "format": 0, 00:22:32.426 "firmware": 0, 00:22:32.426 "ns_manage": 0 00:22:32.426 }, 00:22:32.426 "multi_ctrlr": true, 00:22:32.426 "ana_reporting": false 00:22:32.426 }, 00:22:32.426 "vs": { 00:22:32.426 "nvme_version": "1.3" 00:22:32.426 }, 00:22:32.426 "ns_data": { 00:22:32.426 "id": 1, 00:22:32.426 "can_share": true 00:22:32.426 } 00:22:32.426 } 00:22:32.426 ], 00:22:32.426 "mp_policy": "active_passive" 00:22:32.426 } 00:22:32.426 } 00:22:32.426 ] 00:22:32.426 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.426 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:32.426 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.426 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.426 [2024-11-19 13:14:35.689615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.426 [2024-11-19 13:14:35.689676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a7220 (9): Bad file descriptor 00:22:32.685 [2024-11-19 13:14:35.822043] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:22:32.685 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.685 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:32.685 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.685 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.685 [ 00:22:32.685 { 00:22:32.685 "name": "nvme0n1", 00:22:32.685 "aliases": [ 00:22:32.685 "15e10a55-7909-4089-88bf-9038539a2100" 00:22:32.685 ], 00:22:32.685 "product_name": "NVMe disk", 00:22:32.685 "block_size": 512, 00:22:32.685 "num_blocks": 2097152, 00:22:32.685 "uuid": "15e10a55-7909-4089-88bf-9038539a2100", 00:22:32.685 "numa_id": 1, 00:22:32.685 "assigned_rate_limits": { 00:22:32.685 "rw_ios_per_sec": 0, 00:22:32.685 "rw_mbytes_per_sec": 0, 00:22:32.685 "r_mbytes_per_sec": 0, 00:22:32.685 "w_mbytes_per_sec": 0 00:22:32.685 }, 00:22:32.685 "claimed": false, 00:22:32.685 "zoned": false, 00:22:32.685 "supported_io_types": { 00:22:32.685 "read": true, 00:22:32.685 "write": true, 00:22:32.685 "unmap": false, 00:22:32.685 "flush": true, 00:22:32.685 "reset": true, 00:22:32.685 "nvme_admin": true, 00:22:32.685 "nvme_io": true, 00:22:32.685 "nvme_io_md": false, 00:22:32.685 "write_zeroes": true, 00:22:32.685 "zcopy": false, 00:22:32.685 "get_zone_info": false, 00:22:32.685 "zone_management": false, 00:22:32.685 "zone_append": false, 00:22:32.685 "compare": true, 00:22:32.685 "compare_and_write": true, 00:22:32.685 "abort": true, 00:22:32.685 "seek_hole": false, 00:22:32.685 "seek_data": false, 00:22:32.685 "copy": true, 00:22:32.685 "nvme_iov_md": false 00:22:32.685 }, 00:22:32.685 "memory_domains": [ 00:22:32.685 { 00:22:32.685 "dma_device_id": "system", 00:22:32.685 "dma_device_type": 1 00:22:32.685 } 00:22:32.685 ], 00:22:32.685 "driver_specific": { 00:22:32.685 "nvme": [ 00:22:32.685 { 00:22:32.685 "trid": { 00:22:32.685 "trtype": "TCP", 00:22:32.685 "adrfam": "IPv4", 00:22:32.685 "traddr": "10.0.0.2", 00:22:32.685 "trsvcid": "4420", 00:22:32.685 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:32.685 }, 00:22:32.685 "ctrlr_data": { 00:22:32.685 "cntlid": 2, 00:22:32.685 "vendor_id": "0x8086", 00:22:32.685 "model_number": "SPDK bdev Controller", 00:22:32.685 "serial_number": "00000000000000000000", 00:22:32.685 "firmware_revision": "25.01", 00:22:32.685 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:32.685 "oacs": { 00:22:32.685 "security": 0, 00:22:32.685 "format": 0, 00:22:32.685 "firmware": 0, 00:22:32.685 "ns_manage": 0 00:22:32.685 }, 00:22:32.685 "multi_ctrlr": true, 00:22:32.685 "ana_reporting": false 00:22:32.685 }, 00:22:32.685 "vs": { 00:22:32.685 "nvme_version": "1.3" 00:22:32.685 }, 00:22:32.685 "ns_data": { 00:22:32.685 "id": 1, 00:22:32.686 "can_share": true 00:22:32.686 } 00:22:32.686 } 00:22:32.686 ], 00:22:32.686 "mp_policy": "active_passive" 00:22:32.686 } 00:22:32.686 } 00:22:32.686 ] 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
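The reset check that produced the second dump hinges on the controller ID: the first bdev_get_bdevs reported cntlid 1, and after bdev_nvme_reset_controller the host drops the connection (the transient "Bad file descriptor" flush error above is the expected side effect) and reconnects, so the target issues cntlid 2 while the namespace UUID stays the same. A minimal way to confirm the same thing, assuming jq is available on the host:

    scripts/rpc.py bdev_nvme_reset_controller nvme0
    # cntlid should have advanced (1 -> 2); uuid should still be 15e10a55-...
    scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
        | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid, .[0].uuid'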
00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.U8SaiIA9qK 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.U8SaiIA9qK 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.U8SaiIA9qK 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.686 [2024-11-19 13:14:35.898257] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:32.686 [2024-11-19 13:14:35.898379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.686 [2024-11-19 13:14:35.918320] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.686 nvme0n1 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.686 13:14:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.686 [ 00:22:32.686 { 00:22:32.686 "name": "nvme0n1", 00:22:32.686 "aliases": [ 00:22:32.686 "15e10a55-7909-4089-88bf-9038539a2100" 00:22:32.686 ], 00:22:32.686 "product_name": "NVMe disk", 00:22:32.686 "block_size": 512, 00:22:32.686 "num_blocks": 2097152, 00:22:32.686 "uuid": "15e10a55-7909-4089-88bf-9038539a2100", 00:22:32.686 "numa_id": 1, 00:22:32.686 "assigned_rate_limits": { 00:22:32.686 "rw_ios_per_sec": 0, 00:22:32.686 "rw_mbytes_per_sec": 0, 00:22:32.686 "r_mbytes_per_sec": 0, 00:22:32.686 "w_mbytes_per_sec": 0 00:22:32.686 }, 00:22:32.686 "claimed": false, 00:22:32.686 "zoned": false, 00:22:32.686 "supported_io_types": { 00:22:32.686 "read": true, 00:22:32.686 "write": true, 00:22:32.686 "unmap": false, 00:22:32.686 "flush": true, 00:22:32.686 "reset": true, 00:22:32.686 "nvme_admin": true, 00:22:32.686 "nvme_io": true, 00:22:32.686 "nvme_io_md": false, 00:22:32.686 "write_zeroes": true, 00:22:32.686 "zcopy": false, 00:22:32.686 "get_zone_info": false, 00:22:32.686 "zone_management": false, 00:22:32.686 "zone_append": false, 00:22:32.686 "compare": true, 00:22:32.686 "compare_and_write": true, 00:22:32.686 "abort": true, 00:22:32.686 "seek_hole": false, 00:22:32.686 "seek_data": false, 00:22:32.686 "copy": true, 00:22:32.686 "nvme_iov_md": false 00:22:32.686 }, 00:22:32.686 "memory_domains": [ 00:22:32.686 { 00:22:32.686 "dma_device_id": "system", 00:22:32.686 "dma_device_type": 1 00:22:32.686 } 00:22:32.686 ], 00:22:32.686 "driver_specific": { 00:22:32.686 "nvme": [ 00:22:32.686 { 00:22:32.686 "trid": { 00:22:32.686 "trtype": "TCP", 00:22:32.686 "adrfam": "IPv4", 00:22:32.686 "traddr": "10.0.0.2", 00:22:32.686 "trsvcid": "4421", 00:22:32.686 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:32.686 }, 00:22:32.686 "ctrlr_data": { 00:22:32.686 "cntlid": 3, 00:22:32.686 "vendor_id": "0x8086", 00:22:32.686 "model_number": "SPDK bdev Controller", 00:22:32.686 "serial_number": "00000000000000000000", 00:22:32.686 "firmware_revision": "25.01", 00:22:32.686 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:32.686 "oacs": { 00:22:32.686 "security": 0, 00:22:32.686 "format": 0, 00:22:32.686 "firmware": 0, 00:22:32.686 "ns_manage": 0 00:22:32.686 }, 00:22:32.686 "multi_ctrlr": true, 00:22:32.686 "ana_reporting": false 00:22:32.686 }, 00:22:32.686 "vs": { 00:22:32.686 "nvme_version": "1.3" 00:22:32.686 }, 00:22:32.686 "ns_data": { 00:22:32.686 "id": 1, 00:22:32.686 "can_share": true 00:22:32.686 } 00:22:32.686 } 00:22:32.686 ], 00:22:32.686 "mp_policy": "active_passive" 00:22:32.686 } 00:22:32.686 } 00:22:32.686 ] 00:22:32.686 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.686 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.686 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.686 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.686 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.686 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.U8SaiIA9qK 00:22:32.686 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
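The TLS leg just exercised repeats the pattern behind a secured listener: a PSK in NVMe TLS interchange format is written to a mode-0600 temp file, registered in the keyring as key0, and then required both when admitting the host NQN and when attaching. A condensed sketch of the sequence, reusing the (non-secret) test key from the trace:

    KEY_PATH=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"
    scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
    # Close the subsystem to unknown hosts, then listen with TLS on 4421
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    # Host side presents the matching hostnqn and PSK; the new controller gets cntlid 3
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0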
00:22:32.686 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:32.686 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:32.686 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:32.686 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:32.686 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:32.686 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:32.686 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:32.686 rmmod nvme_tcp 00:22:32.686 rmmod nvme_fabrics 00:22:32.945 rmmod nvme_keyring 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2921763 ']' 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2921763 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2921763 ']' 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2921763 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2921763 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2921763' 00:22:32.945 killing process with pid 2921763 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2921763 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2921763 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:32.945 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:32.946 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:32.946 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:32.946 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:32.946 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:22:32.946 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.946 13:14:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:35.484 00:22:35.484 real 0m9.446s 00:22:35.484 user 0m3.055s 00:22:35.484 sys 0m4.843s 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.484 ************************************ 00:22:35.484 END TEST nvmf_async_init 00:22:35.484 ************************************ 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.484 ************************************ 00:22:35.484 START TEST dma 00:22:35.484 ************************************ 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:35.484 * Looking for test storage... 00:22:35.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:35.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.484 --rc genhtml_branch_coverage=1 00:22:35.484 --rc genhtml_function_coverage=1 00:22:35.484 --rc genhtml_legend=1 00:22:35.484 --rc geninfo_all_blocks=1 00:22:35.484 --rc geninfo_unexecuted_blocks=1 00:22:35.484 00:22:35.484 ' 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:35.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.484 --rc genhtml_branch_coverage=1 00:22:35.484 --rc genhtml_function_coverage=1 00:22:35.484 --rc genhtml_legend=1 00:22:35.484 --rc geninfo_all_blocks=1 00:22:35.484 --rc geninfo_unexecuted_blocks=1 00:22:35.484 00:22:35.484 ' 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:35.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.484 --rc genhtml_branch_coverage=1 00:22:35.484 --rc genhtml_function_coverage=1 00:22:35.484 --rc genhtml_legend=1 00:22:35.484 --rc geninfo_all_blocks=1 00:22:35.484 --rc geninfo_unexecuted_blocks=1 00:22:35.484 00:22:35.484 ' 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:35.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.484 --rc genhtml_branch_coverage=1 00:22:35.484 --rc genhtml_function_coverage=1 00:22:35.484 --rc genhtml_legend=1 00:22:35.484 --rc geninfo_all_blocks=1 00:22:35.484 --rc geninfo_unexecuted_blocks=1 00:22:35.484 00:22:35.484 ' 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.484 
13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.484 13:14:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:35.485 00:22:35.485 real 0m0.204s 00:22:35.485 user 0m0.124s 00:22:35.485 sys 0m0.094s 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:35.485 ************************************ 00:22:35.485 END TEST dma 00:22:35.485 ************************************ 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.485 ************************************ 00:22:35.485 START TEST nvmf_identify 00:22:35.485 
************************************ 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:35.485 * Looking for test storage... 00:22:35.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:22:35.485 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:35.745 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:35.745 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.745 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.745 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.745 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.745 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.745 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.745 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.745 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.745 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.745 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.745 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.745 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:35.745 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:35.745 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:35.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.746 --rc genhtml_branch_coverage=1 00:22:35.746 --rc genhtml_function_coverage=1 00:22:35.746 --rc genhtml_legend=1 00:22:35.746 --rc geninfo_all_blocks=1 00:22:35.746 --rc geninfo_unexecuted_blocks=1 00:22:35.746 00:22:35.746 ' 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:35.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.746 --rc genhtml_branch_coverage=1 00:22:35.746 --rc genhtml_function_coverage=1 00:22:35.746 --rc genhtml_legend=1 00:22:35.746 --rc geninfo_all_blocks=1 00:22:35.746 --rc geninfo_unexecuted_blocks=1 00:22:35.746 00:22:35.746 ' 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:35.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.746 --rc genhtml_branch_coverage=1 00:22:35.746 --rc genhtml_function_coverage=1 00:22:35.746 --rc genhtml_legend=1 00:22:35.746 --rc geninfo_all_blocks=1 00:22:35.746 --rc geninfo_unexecuted_blocks=1 00:22:35.746 00:22:35.746 ' 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:35.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.746 --rc genhtml_branch_coverage=1 00:22:35.746 --rc genhtml_function_coverage=1 00:22:35.746 --rc genhtml_legend=1 00:22:35.746 --rc geninfo_all_blocks=1 00:22:35.746 --rc geninfo_unexecuted_blocks=1 00:22:35.746 00:22:35.746 ' 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.746 13:14:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.328 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:42.329 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:42.329 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
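The NIC discovery above is a PCI-ID table walk: nvmf/common.sh collects Intel E810 functions (0x1592, 0x159b), X722 (0x37d2), and a list of Mellanox IDs, and on this rig both 0000:86:00.0 and 0000:86:00.1 match 8086:159b under the ice driver. A manual spot-check of the same facts, with the device ID taken from the table above:

    # E810 (0x159b) functions, the set the helper's pci_bus_cache ends up with
    lspci -d 8086:159b
    # Net devices hanging off one function, the source of the cvl_0_* names
    ls /sys/bus/pci/devices/0000:86:00.0/net/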
00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:42.329 Found net devices under 0000:86:00.0: cvl_0_0 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:42.329 Found net devices under 0000:86:00.1: cvl_0_1 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:42.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:22:42.329 00:22:42.329 --- 10.0.0.2 ping statistics --- 00:22:42.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.329 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:42.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:22:42.329 00:22:42.329 --- 10.0.0.1 ping statistics --- 00:22:42.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.329 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2925582 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2925582 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2925582 ']' 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.329 13:14:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.329 [2024-11-19 13:14:44.916488] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
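The initialization above splits target and initiator across a network namespace so traffic actually crosses the NIC: port cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace on four cores (-m 0xF). The wiring, condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1 && ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow the NVMe/TCP port through the firewall, then start the target in the namespace
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF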
00:22:42.329 [2024-11-19 13:14:44.916533] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.329 [2024-11-19 13:14:44.994679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:42.329 [2024-11-19 13:14:45.035946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.330 [2024-11-19 13:14:45.035987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.330 [2024-11-19 13:14:45.035995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.330 [2024-11-19 13:14:45.036003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.330 [2024-11-19 13:14:45.036007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:42.330 [2024-11-19 13:14:45.037657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.330 [2024-11-19 13:14:45.037767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.330 [2024-11-19 13:14:45.037875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.330 [2024-11-19 13:14:45.037876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.330 [2024-11-19 13:14:45.150752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.330 Malloc0 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:42.330 [2024-11-19 13:14:45.253569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:42.330 [
00:22:42.330   {
00:22:42.330     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:42.330     "subtype": "Discovery",
00:22:42.330     "listen_addresses": [
00:22:42.330       {
00:22:42.330         "trtype": "TCP",
00:22:42.330         "adrfam": "IPv4",
00:22:42.330         "traddr": "10.0.0.2",
00:22:42.330         "trsvcid": "4420"
00:22:42.330       }
00:22:42.330     ],
00:22:42.330     "allow_any_host": true,
00:22:42.330     "hosts": []
00:22:42.330   },
00:22:42.330   {
00:22:42.330     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:42.330     "subtype": "NVMe",
00:22:42.330     "listen_addresses": [
00:22:42.330       {
00:22:42.330         "trtype": "TCP",
00:22:42.330         "adrfam": "IPv4",
00:22:42.330         "traddr": "10.0.0.2",
00:22:42.330         "trsvcid": "4420"
00:22:42.330       }
00:22:42.330     ],
00:22:42.330     "allow_any_host": true,
00:22:42.330     "hosts": [],
00:22:42.330     "serial_number": "SPDK00000000000001",
00:22:42.330     "model_number": "SPDK bdev Controller",
00:22:42.330     "max_namespaces": 32,
00:22:42.330     "min_cntlid": 1,
00:22:42.330     "max_cntlid": 65519,
00:22:42.330     "namespaces": [
00:22:42.330       {
00:22:42.330         "nsid": 1,
00:22:42.330         "bdev_name": "Malloc0",
00:22:42.330         "name": "Malloc0",
00:22:42.330         "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:22:42.330         "eui64": "ABCDEF0123456789",
00:22:42.330         "uuid": "b9a2e5ee-4100-4e84-b93a-6f082718fe60"
00:22:42.330       }
00:22:42.330     ]
00:22:42.330   }
00:22:42.330 ]
00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.330 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:42.330 [2024-11-19 13:14:45.305226] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:22:42.330 [2024-11-19 13:14:45.305266] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2925682 ] 00:22:42.330 [2024-11-19 13:14:45.346932] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:42.330 [2024-11-19 13:14:45.350984] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:42.330 [2024-11-19 13:14:45.350990] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:42.330 [2024-11-19 13:14:45.351001] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:42.330 [2024-11-19 13:14:45.351011] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:42.330 [2024-11-19 13:14:45.351570] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:42.330 [2024-11-19 13:14:45.351598] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x84f690 0 00:22:42.330 [2024-11-19 13:14:45.357969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:42.330 [2024-11-19 13:14:45.357985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:42.330 [2024-11-19 13:14:45.357990] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:42.330 [2024-11-19 13:14:45.357993] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:42.330 [2024-11-19 13:14:45.358026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.330 [2024-11-19 13:14:45.358032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.330 [2024-11-19 13:14:45.358035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84f690) 00:22:42.330 [2024-11-19 13:14:45.358047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:42.330 [2024-11-19 13:14:45.358063] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1100, cid 0, qid 0 00:22:42.330 [2024-11-19 13:14:45.365958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.330 [2024-11-19 13:14:45.365967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.330 [2024-11-19 13:14:45.365971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.330 [2024-11-19 13:14:45.365974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1100) on tqpair=0x84f690 00:22:42.330 [2024-11-19 13:14:45.365985] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:42.330 [2024-11-19 13:14:45.365992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:42.330 [2024-11-19 13:14:45.366000] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:42.330 [2024-11-19 13:14:45.366013] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.330 [2024-11-19 13:14:45.366016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.330 [2024-11-19 13:14:45.366020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84f690) 00:22:42.330 [2024-11-19 13:14:45.366027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.330 [2024-11-19 13:14:45.366039] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1100, cid 0, qid 0 00:22:42.330 [2024-11-19 13:14:45.366146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.330 [2024-11-19 13:14:45.366152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.330 [2024-11-19 13:14:45.366155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.330 [2024-11-19 13:14:45.366158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1100) on tqpair=0x84f690 00:22:42.330 [2024-11-19 13:14:45.366163] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:42.330 [2024-11-19 13:14:45.366169] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:42.330 [2024-11-19 13:14:45.366176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.330 [2024-11-19 13:14:45.366179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.330 [2024-11-19 13:14:45.366182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84f690) 00:22:42.331 [2024-11-19 13:14:45.366188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-19 13:14:45.366199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1100, cid 0, qid 0 00:22:42.331 [2024-11-19 13:14:45.366293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.331 [2024-11-19 13:14:45.366299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.331 [2024-11-19 13:14:45.366301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.366305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1100) on tqpair=0x84f690 00:22:42.331 [2024-11-19 13:14:45.366309] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:42.331 [2024-11-19 13:14:45.366316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:42.331 [2024-11-19 13:14:45.366322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.366325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.366328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84f690) 00:22:42.331 [2024-11-19 13:14:45.366334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-19 13:14:45.366343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1100, cid 0, qid 0 
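The rpc_cmd calls earlier in the trace (host/identify.sh@24 through @35) provision the target over its default UNIX-domain RPC socket, /var/tmp/spdk.sock: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, one subsystem carrying that bdev as namespace 1, and listeners for both the subsystem and the discovery service. Outside the harness the same sequence can be issued with SPDK's scripts/rpc.py; a sketch with the logged arguments passed through verbatim (consult rpc.py -h on your SPDK version for the exact meaning of the transport flags):

  RPC=./scripts/rpc.py    # talks to /var/tmp/spdk.sock by default

  # TCP transport, flags exactly as logged (-t tcp -o -u 8192)
  $RPC nvmf_create_transport -t tcp -o -u 8192

  # Backing namespace: 64 MB malloc bdev with 512-byte blocks
  $RPC bdev_malloc_create 64 512 -b Malloc0

  # Subsystem allowing any host, with a fixed serial number
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

  # Attach the bdev as namespace 1, with explicit NGUID and EUI-64
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789

  # Listeners for the NVM subsystem and for the discovery service
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Should print the two-subsystem JSON shown above
  $RPC nvmf_get_subsystems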
00:22:42.331 [2024-11-19 13:14:45.366405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.331 [2024-11-19 13:14:45.366411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.331 [2024-11-19 13:14:45.366413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.366417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1100) on tqpair=0x84f690 00:22:42.331 [2024-11-19 13:14:45.366421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:42.331 [2024-11-19 13:14:45.366431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.366435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.366438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84f690) 00:22:42.331 [2024-11-19 13:14:45.366444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-19 13:14:45.366453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1100, cid 0, qid 0 00:22:42.331 [2024-11-19 13:14:45.366545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.331 [2024-11-19 13:14:45.366551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.331 [2024-11-19 13:14:45.366554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.366557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1100) on tqpair=0x84f690 00:22:42.331 [2024-11-19 13:14:45.366562] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:42.331 [2024-11-19 13:14:45.366566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:42.331 [2024-11-19 13:14:45.366572] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:42.331 [2024-11-19 13:14:45.366680] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:42.331 [2024-11-19 13:14:45.366684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:42.331 [2024-11-19 13:14:45.366691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.366695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.366698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84f690) 00:22:42.331 [2024-11-19 13:14:45.366703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-19 13:14:45.366713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1100, cid 0, qid 0 00:22:42.331 [2024-11-19 13:14:45.366778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.331 [2024-11-19 13:14:45.366784] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.331 [2024-11-19 13:14:45.366787] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.366790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1100) on tqpair=0x84f690 00:22:42.331 [2024-11-19 13:14:45.366794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:42.331 [2024-11-19 13:14:45.366802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.366805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.366808] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84f690) 00:22:42.331 [2024-11-19 13:14:45.366814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-19 13:14:45.366824] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1100, cid 0, qid 0 00:22:42.331 [2024-11-19 13:14:45.366931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.331 [2024-11-19 13:14:45.366936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.331 [2024-11-19 13:14:45.366940] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.366943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1100) on tqpair=0x84f690 00:22:42.331 [2024-11-19 13:14:45.366956] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:42.331 [2024-11-19 13:14:45.366963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:42.331 [2024-11-19 13:14:45.366970] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:42.331 [2024-11-19 13:14:45.366980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:42.331 [2024-11-19 13:14:45.366988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.366991] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84f690) 00:22:42.331 [2024-11-19 13:14:45.366996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.331 [2024-11-19 13:14:45.367007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1100, cid 0, qid 0 00:22:42.331 [2024-11-19 13:14:45.367096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.331 [2024-11-19 13:14:45.367102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.331 [2024-11-19 13:14:45.367105] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.367108] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84f690): datao=0, datal=4096, cccid=0 00:22:42.331 [2024-11-19 13:14:45.367112] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x8b1100) on tqpair(0x84f690): expected_datao=0, payload_size=4096 00:22:42.331 [2024-11-19 13:14:45.367116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.367134] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.367138] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.367180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.331 [2024-11-19 13:14:45.367186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.331 [2024-11-19 13:14:45.367189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.367192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1100) on tqpair=0x84f690 00:22:42.331 [2024-11-19 13:14:45.367199] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:42.331 [2024-11-19 13:14:45.367204] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:42.331 [2024-11-19 13:14:45.367208] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:42.331 [2024-11-19 13:14:45.367215] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:42.331 [2024-11-19 13:14:45.367220] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:42.331 [2024-11-19 13:14:45.367224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:42.331 [2024-11-19 13:14:45.367232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:42.331 [2024-11-19 13:14:45.367238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.367242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.367245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84f690) 00:22:42.331 [2024-11-19 13:14:45.367251] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.331 [2024-11-19 13:14:45.367261] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1100, cid 0, qid 0 00:22:42.331 [2024-11-19 13:14:45.367323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.331 [2024-11-19 13:14:45.367329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.331 [2024-11-19 13:14:45.367332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.367335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1100) on tqpair=0x84f690 00:22:42.331 [2024-11-19 13:14:45.367341] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.367344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.367348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84f690) 00:22:42.331 [2024-11-19 
13:14:45.367353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.331 [2024-11-19 13:14:45.367358] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.367361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.331 [2024-11-19 13:14:45.367364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x84f690) 00:22:42.331 [2024-11-19 13:14:45.367369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.331 [2024-11-19 13:14:45.367374] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.367377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.367380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x84f690) 00:22:42.332 [2024-11-19 13:14:45.367385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.332 [2024-11-19 13:14:45.367390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.367394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.367397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84f690) 00:22:42.332 [2024-11-19 13:14:45.367401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.332 [2024-11-19 13:14:45.367406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:42.332 [2024-11-19 13:14:45.367413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:42.332 [2024-11-19 13:14:45.367419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.367422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84f690) 00:22:42.332 [2024-11-19 13:14:45.367427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-19 13:14:45.367438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1100, cid 0, qid 0 00:22:42.332 [2024-11-19 13:14:45.367443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1280, cid 1, qid 0 00:22:42.332 [2024-11-19 13:14:45.367447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1400, cid 2, qid 0 00:22:42.332 [2024-11-19 13:14:45.367451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1580, cid 3, qid 0 00:22:42.332 [2024-11-19 13:14:45.367455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1700, cid 4, qid 0 00:22:42.332 [2024-11-19 13:14:45.367572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.332 [2024-11-19 13:14:45.367578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.332 [2024-11-19 13:14:45.367581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.332 
[2024-11-19 13:14:45.367584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1700) on tqpair=0x84f690 00:22:42.332 [2024-11-19 13:14:45.367592] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:42.332 [2024-11-19 13:14:45.367597] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:42.332 [2024-11-19 13:14:45.367605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.367609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84f690) 00:22:42.332 [2024-11-19 13:14:45.367614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-19 13:14:45.367624] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1700, cid 4, qid 0 00:22:42.332 [2024-11-19 13:14:45.367697] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.332 [2024-11-19 13:14:45.367703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.332 [2024-11-19 13:14:45.367706] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.367709] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84f690): datao=0, datal=4096, cccid=4 00:22:42.332 [2024-11-19 13:14:45.367713] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8b1700) on tqpair(0x84f690): expected_datao=0, payload_size=4096 00:22:42.332 [2024-11-19 13:14:45.367717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.367723] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.367726] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.367771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.332 [2024-11-19 13:14:45.367777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.332 [2024-11-19 13:14:45.367780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.367783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1700) on tqpair=0x84f690 00:22:42.332 [2024-11-19 13:14:45.367794] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:42.332 [2024-11-19 13:14:45.367814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.367818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84f690) 00:22:42.332 [2024-11-19 13:14:45.367824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-19 13:14:45.367830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.367833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.367836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x84f690) 00:22:42.332 [2024-11-19 13:14:45.367841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.332 [2024-11-19 13:14:45.367855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1700, cid 4, qid 0 00:22:42.332 [2024-11-19 13:14:45.367860] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1880, cid 5, qid 0 00:22:42.332 [2024-11-19 13:14:45.367979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.332 [2024-11-19 13:14:45.367986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.332 [2024-11-19 13:14:45.367989] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.367992] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84f690): datao=0, datal=1024, cccid=4 00:22:42.332 [2024-11-19 13:14:45.367995] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8b1700) on tqpair(0x84f690): expected_datao=0, payload_size=1024 00:22:42.332 [2024-11-19 13:14:45.367999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.368007] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.368011] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.368015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.332 [2024-11-19 13:14:45.368020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.332 [2024-11-19 13:14:45.368023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.368026] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1880) on tqpair=0x84f690 00:22:42.332 [2024-11-19 13:14:45.409088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.332 [2024-11-19 13:14:45.409100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.332 [2024-11-19 13:14:45.409103] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.409107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1700) on tqpair=0x84f690 00:22:42.332 [2024-11-19 13:14:45.409118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.409121] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84f690) 00:22:42.332 [2024-11-19 13:14:45.409128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.332 [2024-11-19 13:14:45.409144] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1700, cid 4, qid 0 00:22:42.332 [2024-11-19 13:14:45.409215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.332 [2024-11-19 13:14:45.409221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.332 [2024-11-19 13:14:45.409224] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.332 [2024-11-19 13:14:45.409227] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84f690): datao=0, datal=3072, cccid=4 00:22:42.332 [2024-11-19 13:14:45.409232] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8b1700) on tqpair(0x84f690): expected_datao=0, payload_size=3072 00:22:42.332 [2024-11-19 13:14:45.409236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
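In the GET LOG PAGE commands traced above, cdw10 carries log page 0x70 (the discovery log) in its low byte and the zero-based dword count in bits 16-31: 0x00ff0070 reads 256 dwords (the 1024-byte header, yielding the generation counter and record count), 0x02ff0070 reads 768 dwords (3072 bytes: header plus the two 1024-byte entries), and 0x00010070 re-reads 2 dwords (the 8-byte generation counter) to confirm the log did not change mid-fetch; the decoded report follows the final completion records below. The same discovery data can be pulled with the kernel initiator from the root namespace, assuming an nvme-cli build with TCP support:

  sudo modprobe nvme-tcp
  # Queries the same discovery service; the output mirrors the two
  # Discovery Log Entries decoded below
  sudo nvme discover -t tcp -a 10.0.0.2 -s 4420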
00:22:42.332 [2024-11-19 13:14:45.409265] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:42.332 [2024-11-19 13:14:45.409269] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:42.332 [2024-11-19 13:14:45.409338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:42.332 [2024-11-19 13:14:45.409343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:42.332 [2024-11-19 13:14:45.409346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:42.332 [2024-11-19 13:14:45.409350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1700) on tqpair=0x84f690
00:22:42.332 [2024-11-19 13:14:45.409356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:42.332 [2024-11-19 13:14:45.409360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84f690)
00:22:42.332 [2024-11-19 13:14:45.409365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.332 [2024-11-19 13:14:45.409379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1700, cid 4, qid 0
00:22:42.332 [2024-11-19 13:14:45.409450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:42.332 [2024-11-19 13:14:45.409456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:42.332 [2024-11-19 13:14:45.409459] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:42.332 [2024-11-19 13:14:45.409462] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84f690): datao=0, datal=8, cccid=4
00:22:42.332 [2024-11-19 13:14:45.409465] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8b1700) on tqpair(0x84f690): expected_datao=0, payload_size=8
00:22:42.332 [2024-11-19 13:14:45.409469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:42.332 [2024-11-19 13:14:45.409475] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:42.332 [2024-11-19 13:14:45.409481] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:42.332 [2024-11-19 13:14:45.452962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:42.332 [2024-11-19 13:14:45.452974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:42.332 [2024-11-19 13:14:45.452977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:42.332 [2024-11-19 13:14:45.452980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1700) on tqpair=0x84f690
00:22:42.332 =====================================================
00:22:42.332 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:42.332 =====================================================
00:22:42.332 Controller Capabilities/Features
00:22:42.332 ================================
00:22:42.332 Vendor ID: 0000
00:22:42.332 Subsystem Vendor ID: 0000
00:22:42.332 Serial Number: ....................
00:22:42.332 Model Number: ........................................
00:22:42.333 Firmware Version: 25.01
00:22:42.333 Recommended Arb Burst: 0
00:22:42.333 IEEE OUI Identifier: 00 00 00
00:22:42.333 Multi-path I/O
00:22:42.333 May have multiple subsystem ports: No
00:22:42.333 May have multiple controllers: No
00:22:42.333 Associated with SR-IOV VF: No
00:22:42.333 Max Data Transfer Size: 131072
00:22:42.333 Max Number of Namespaces: 0
00:22:42.333 Max Number of I/O Queues: 1024
00:22:42.333 NVMe Specification Version (VS): 1.3
00:22:42.333 NVMe Specification Version (Identify): 1.3
00:22:42.333 Maximum Queue Entries: 128
00:22:42.333 Contiguous Queues Required: Yes
00:22:42.333 Arbitration Mechanisms Supported
00:22:42.333 Weighted Round Robin: Not Supported
00:22:42.333 Vendor Specific: Not Supported
00:22:42.333 Reset Timeout: 15000 ms
00:22:42.333 Doorbell Stride: 4 bytes
00:22:42.333 NVM Subsystem Reset: Not Supported
00:22:42.333 Command Sets Supported
00:22:42.333 NVM Command Set: Supported
00:22:42.333 Boot Partition: Not Supported
00:22:42.333 Memory Page Size Minimum: 4096 bytes
00:22:42.333 Memory Page Size Maximum: 4096 bytes
00:22:42.333 Persistent Memory Region: Not Supported
00:22:42.333 Optional Asynchronous Events Supported
00:22:42.333 Namespace Attribute Notices: Not Supported
00:22:42.333 Firmware Activation Notices: Not Supported
00:22:42.333 ANA Change Notices: Not Supported
00:22:42.333 PLE Aggregate Log Change Notices: Not Supported
00:22:42.333 LBA Status Info Alert Notices: Not Supported
00:22:42.333 EGE Aggregate Log Change Notices: Not Supported
00:22:42.333 Normal NVM Subsystem Shutdown event: Not Supported
00:22:42.333 Zone Descriptor Change Notices: Not Supported
00:22:42.333 Discovery Log Change Notices: Supported
00:22:42.333 Controller Attributes
00:22:42.333 128-bit Host Identifier: Not Supported
00:22:42.333 Non-Operational Permissive Mode: Not Supported
00:22:42.333 NVM Sets: Not Supported
00:22:42.333 Read Recovery Levels: Not Supported
00:22:42.333 Endurance Groups: Not Supported
00:22:42.333 Predictable Latency Mode: Not Supported
00:22:42.333 Traffic Based Keep ALive: Not Supported
00:22:42.333 Namespace Granularity: Not Supported
00:22:42.333 SQ Associations: Not Supported
00:22:42.333 UUID List: Not Supported
00:22:42.333 Multi-Domain Subsystem: Not Supported
00:22:42.333 Fixed Capacity Management: Not Supported
00:22:42.333 Variable Capacity Management: Not Supported
00:22:42.333 Delete Endurance Group: Not Supported
00:22:42.333 Delete NVM Set: Not Supported
00:22:42.333 Extended LBA Formats Supported: Not Supported
00:22:42.333 Flexible Data Placement Supported: Not Supported
00:22:42.333
00:22:42.333 Controller Memory Buffer Support
00:22:42.333 ================================
00:22:42.333 Supported: No
00:22:42.333
00:22:42.333 Persistent Memory Region Support
00:22:42.333 ================================
00:22:42.333 Supported: No
00:22:42.333
00:22:42.333 Admin Command Set Attributes
00:22:42.333 ============================
00:22:42.333 Security Send/Receive: Not Supported
00:22:42.333 Format NVM: Not Supported
00:22:42.333 Firmware Activate/Download: Not Supported
00:22:42.333 Namespace Management: Not Supported
00:22:42.333 Device Self-Test: Not Supported
00:22:42.333 Directives: Not Supported
00:22:42.333 NVMe-MI: Not Supported
00:22:42.333 Virtualization Management: Not Supported
00:22:42.333 Doorbell Buffer Config: Not Supported
00:22:42.333 Get LBA Status Capability: Not Supported
00:22:42.333 Command & Feature Lockdown Capability: Not Supported
00:22:42.333 Abort Command Limit: 1
00:22:42.333 Async Event Request Limit: 4
00:22:42.333 Number of Firmware Slots: N/A
00:22:42.333 Firmware Slot 1 Read-Only: N/A
00:22:42.333 Firmware Activation Without Reset: N/A
00:22:42.333 Multiple Update Detection Support: N/A
00:22:42.333 Firmware Update Granularity: No Information Provided
00:22:42.333 Per-Namespace SMART Log: No
00:22:42.333 Asymmetric Namespace Access Log Page: Not Supported
00:22:42.333 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:42.333 Command Effects Log Page: Not Supported
00:22:42.333 Get Log Page Extended Data: Supported
00:22:42.333 Telemetry Log Pages: Not Supported
00:22:42.333 Persistent Event Log Pages: Not Supported
00:22:42.333 Supported Log Pages Log Page: May Support
00:22:42.333 Commands Supported & Effects Log Page: Not Supported
00:22:42.333 Feature Identifiers & Effects Log Page: May Support
00:22:42.333 NVMe-MI Commands & Effects Log Page: May Support
00:22:42.333 Data Area 4 for Telemetry Log: Not Supported
00:22:42.333 Error Log Page Entries Supported: 128
00:22:42.333 Keep Alive: Not Supported
00:22:42.333
00:22:42.333 NVM Command Set Attributes
00:22:42.333 ==========================
00:22:42.333 Submission Queue Entry Size
00:22:42.333 Max: 1
00:22:42.333 Min: 1
00:22:42.333 Completion Queue Entry Size
00:22:42.333 Max: 1
00:22:42.333 Min: 1
00:22:42.333 Number of Namespaces: 0
00:22:42.333 Compare Command: Not Supported
00:22:42.333 Write Uncorrectable Command: Not Supported
00:22:42.333 Dataset Management Command: Not Supported
00:22:42.333 Write Zeroes Command: Not Supported
00:22:42.333 Set Features Save Field: Not Supported
00:22:42.333 Reservations: Not Supported
00:22:42.333 Timestamp: Not Supported
00:22:42.333 Copy: Not Supported
00:22:42.333 Volatile Write Cache: Not Present
00:22:42.333 Atomic Write Unit (Normal): 1
00:22:42.333 Atomic Write Unit (PFail): 1
00:22:42.333 Atomic Compare & Write Unit: 1
00:22:42.333 Fused Compare & Write: Supported
00:22:42.333 Scatter-Gather List
00:22:42.333 SGL Command Set: Supported
00:22:42.333 SGL Keyed: Supported
00:22:42.333 SGL Bit Bucket Descriptor: Not Supported
00:22:42.333 SGL Metadata Pointer: Not Supported
00:22:42.333 Oversized SGL: Not Supported
00:22:42.333 SGL Metadata Address: Not Supported
00:22:42.333 SGL Offset: Supported
00:22:42.333 Transport SGL Data Block: Not Supported
00:22:42.333 Replay Protected Memory Block: Not Supported
00:22:42.333
00:22:42.333 Firmware Slot Information
00:22:42.333 =========================
00:22:42.333 Active slot: 0
00:22:42.333
00:22:42.333
00:22:42.333 Error Log
00:22:42.333 =========
00:22:42.333
00:22:42.333 Active Namespaces
00:22:42.333 =================
00:22:42.333 Discovery Log Page
00:22:42.333 ==================
00:22:42.333 Generation Counter: 2
00:22:42.333 Number of Records: 2
00:22:42.333 Record Format: 0
00:22:42.333
00:22:42.333 Discovery Log Entry 0
00:22:42.333 ----------------------
00:22:42.333 Transport Type: 3 (TCP)
00:22:42.333 Address Family: 1 (IPv4)
00:22:42.333 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:42.333 Entry Flags:
00:22:42.333 Duplicate Returned Information: 1
00:22:42.333 Explicit Persistent Connection Support for Discovery: 1
00:22:42.333 Transport Requirements:
00:22:42.333 Secure Channel: Not Required
00:22:42.333 Port ID: 0 (0x0000)
00:22:42.333 Controller ID: 65535 (0xffff)
00:22:42.333 Admin Max SQ Size: 128
00:22:42.333 Transport Service Identifier: 4420
00:22:42.333 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:42.334 Transport Address: 10.0.0.2
00:22:42.334 Discovery Log Entry 1
00:22:42.334 ----------------------
00:22:42.334 Transport Type: 3 (TCP)
00:22:42.334 Address Family: 1 (IPv4)
00:22:42.334 Subsystem Type: 2 (NVM Subsystem)
00:22:42.334 Entry Flags:
00:22:42.334 Duplicate Returned Information: 0
00:22:42.334 Explicit Persistent Connection Support for Discovery: 0
00:22:42.334 Transport Requirements:
00:22:42.334 Secure Channel: Not Required
00:22:42.334 Port ID: 0 (0x0000)
00:22:42.334 Controller ID: 65535 (0xffff)
00:22:42.334 Admin Max SQ Size: 128
00:22:42.334 Transport Service Identifier: 4420
00:22:42.334 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:42.334 Transport Address: 10.0.0.2 [2024-11-19 13:14:45.453064] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:22:42.334 [2024-11-19 13:14:45.453075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1100) on tqpair=0x84f690
00:22:42.334 [2024-11-19 13:14:45.453081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.334 [2024-11-19 13:14:45.453086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1280) on tqpair=0x84f690
00:22:42.334 [2024-11-19 13:14:45.453090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.334 [2024-11-19 13:14:45.453094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1400) on tqpair=0x84f690
00:22:42.334 [2024-11-19 13:14:45.453098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.334 [2024-11-19 13:14:45.453102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1580) on tqpair=0x84f690
00:22:42.334 [2024-11-19 13:14:45.453106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:42.334 [2024-11-19 13:14:45.453116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:42.334 [2024-11-19 13:14:45.453120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:42.334 [2024-11-19 13:14:45.453123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84f690)
00:22:42.334 [2024-11-19 13:14:45.453130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:42.334 [2024-11-19 13:14:45.453144] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1580, cid 3, qid 0
00:22:42.334 [2024-11-19 13:14:45.453211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:42.334 [2024-11-19 13:14:45.453217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:42.334 [2024-11-19 13:14:45.453221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:42.334 [2024-11-19 13:14:45.453224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1580) on tqpair=0x84f690
00:22:42.334 [2024-11-19 13:14:45.453230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:42.334 [2024-11-19 13:14:45.453233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:42.334 [2024-11-19 13:14:45.453236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84f690)
00:22:42.334 [2024-11-19 13:14:45.453242]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.334 [2024-11-19 13:14:45.453255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1580, cid 3, qid 0 00:22:42.334 [2024-11-19 13:14:45.453359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.334 [2024-11-19 13:14:45.453364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.334 [2024-11-19 13:14:45.453367] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.334 [2024-11-19 13:14:45.453370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1580) on tqpair=0x84f690 00:22:42.334 [2024-11-19 13:14:45.453374] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:42.334 [2024-11-19 13:14:45.453379] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:42.334 [2024-11-19 13:14:45.453386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.334 [2024-11-19 13:14:45.453392] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.334 [2024-11-19 13:14:45.453395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84f690) 00:22:42.334 [2024-11-19 13:14:45.453401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.334 [2024-11-19 13:14:45.453411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1580, cid 3, qid 0 00:22:42.334 [2024-11-19 13:14:45.453475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.334 [2024-11-19 13:14:45.453481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.334 [2024-11-19 13:14:45.453484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.334 [2024-11-19 13:14:45.453487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1580) on tqpair=0x84f690 00:22:42.334 [2024-11-19 13:14:45.453496] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.334 [2024-11-19 13:14:45.453499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.334 [2024-11-19 13:14:45.453502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84f690) 00:22:42.334 [2024-11-19 13:14:45.453508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.334 [2024-11-19 13:14:45.453518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1580, cid 3, qid 0 00:22:42.334 [2024-11-19 13:14:45.453612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.334 [2024-11-19 13:14:45.453617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.334 [2024-11-19 13:14:45.453620] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.334 [2024-11-19 13:14:45.453624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1580) on tqpair=0x84f690 00:22:42.334 [2024-11-19 13:14:45.453632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.334 [2024-11-19 13:14:45.453635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.334 [2024-11-19 13:14:45.453638] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84f690) 00:22:42.334 [2024-11-19 13:14:45.453644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.334 [2024-11-19 13:14:45.453653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1580, cid 3, qid 0 00:22:42.334 [2024-11-19 13:14:45.453762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.334 [2024-11-19 13:14:45.453767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.334 [2024-11-19 13:14:45.453770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.334 [2024-11-19 13:14:45.453773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1580) on tqpair=0x84f690 00:22:42.334 [2024-11-19 13:14:45.453781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.334 [2024-11-19 13:14:45.453785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.334 [2024-11-19 13:14:45.453788] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84f690) 00:22:42.334 [2024-11-19 13:14:45.453794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.334 [2024-11-19 13:14:45.453803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1580, cid 3, qid 0
[... the same pdu type = 5 / FABRIC PROPERTY GET / cmd_send_complete shutdown-poll sequence for tcp req 0x8b1580 repeats with identical payloads from 13:14:45.453914 through 13:14:45.456880 ...]
00:22:42.336 [2024-11-19 13:14:45.460956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.336 [2024-11-19 13:14:45.460964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.336 [2024-11-19 13:14:45.460967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.336 [2024-11-19 13:14:45.460971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1580) on tqpair=0x84f690 00:22:42.336 [2024-11-19 13:14:45.460980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.336 [2024-11-19 13:14:45.460984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.336 [2024-11-19 13:14:45.460987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84f690) 00:22:42.336 [2024-11-19 13:14:45.460992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.336 [2024-11-19 13:14:45.461003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8b1580, cid 3, qid 0 00:22:42.336 [2024-11-19 13:14:45.461189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.336 [2024-11-19 13:14:45.461194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.336 [2024-11-19 13:14:45.461197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.336 [2024-11-19 13:14:45.461201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8b1580) on tqpair=0x84f690 00:22:42.336 [2024-11-19 13:14:45.461207] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:22:42.336 00:22:42.336 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:42.336 [2024-11-19 13:14:45.498778] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
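The spdk_nvme_identify invocation above can be reproduced against the same target with SPDK's public host API. A minimal sketch, assuming spdk/nvme.h and spdk/env.h from the SPDK tree this job builds; the app name and the trimmed error handling are illustrative only. spdk_nvme_connect() performs, in one call, the icreq exchange, FABRIC CONNECT, and the controller init state machine that the -L all debug output traces in the log that follows.

    #include <stdio.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&opts);
        opts.name = "identify_sketch";   /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* Same transport ID the harness passes to spdk_nvme_identify -r. */
        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Drives the whole init sequence logged below: icreq, FABRIC
         * CONNECT, property get/set, IDENTIFY, AER setup, keep-alive. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial: %.20s Model: %.40s FW: %.8s\n",
               (const char *)cdata->sn, (const char *)cdata->mn,
               (const char *)cdata->fr);

        spdk_nvme_detach(ctrlr);
        return 0;
    }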
00:22:42.336 [2024-11-19 13:14:45.498818] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2925798 ] 00:22:42.336 [2024-11-19 13:14:45.539600] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:42.336 [2024-11-19 13:14:45.539644] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:42.336 [2024-11-19 13:14:45.539649] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:42.336 [2024-11-19 13:14:45.539660] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:42.336 [2024-11-19 13:14:45.539669] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:42.336 [2024-11-19 13:14:45.543121] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:42.336 [2024-11-19 13:14:45.543145] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d58690 0 00:22:42.336 [2024-11-19 13:14:45.550957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:42.336 [2024-11-19 13:14:45.550968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:42.336 [2024-11-19 13:14:45.550972] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:42.336 [2024-11-19 13:14:45.550975] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:42.336 [2024-11-19 13:14:45.551001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.336 [2024-11-19 13:14:45.551006] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.336 [2024-11-19 13:14:45.551009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d58690) 00:22:42.336 [2024-11-19 13:14:45.551019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:42.336 [2024-11-19 13:14:45.551035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba100, cid 0, qid 0 00:22:42.336 [2024-11-19 13:14:45.558958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.336 [2024-11-19 13:14:45.558976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.336 [2024-11-19 13:14:45.558979] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.336 [2024-11-19 13:14:45.558983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba100) on tqpair=0x1d58690 00:22:42.336 [2024-11-19 13:14:45.558992] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:42.336 [2024-11-19 13:14:45.558998] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:42.336 [2024-11-19 13:14:45.559003] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:42.336 [2024-11-19 13:14:45.559014] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559021] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d58690) 00:22:42.337 [2024-11-19 13:14:45.559028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.337 [2024-11-19 13:14:45.559041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba100, cid 0, qid 0 00:22:42.337 [2024-11-19 13:14:45.559193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.337 [2024-11-19 13:14:45.559198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.337 [2024-11-19 13:14:45.559202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba100) on tqpair=0x1d58690 00:22:42.337 [2024-11-19 13:14:45.559209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:42.337 [2024-11-19 13:14:45.559216] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:42.337 [2024-11-19 13:14:45.559222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d58690) 00:22:42.337 [2024-11-19 13:14:45.559235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.337 [2024-11-19 13:14:45.559245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba100, cid 0, qid 0 00:22:42.337 [2024-11-19 13:14:45.559307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.337 [2024-11-19 13:14:45.559313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.337 [2024-11-19 13:14:45.559316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba100) on tqpair=0x1d58690 00:22:42.337 [2024-11-19 13:14:45.559326] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:42.337 [2024-11-19 13:14:45.559333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:42.337 [2024-11-19 13:14:45.559339] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d58690) 00:22:42.337 [2024-11-19 13:14:45.559351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.337 [2024-11-19 13:14:45.559361] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba100, cid 0, qid 0 00:22:42.337 [2024-11-19 13:14:45.559426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.337 [2024-11-19 13:14:45.559432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.337 [2024-11-19 
13:14:45.559435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba100) on tqpair=0x1d58690 00:22:42.337 [2024-11-19 13:14:45.559442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:42.337 [2024-11-19 13:14:45.559450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559454] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d58690) 00:22:42.337 [2024-11-19 13:14:45.559463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.337 [2024-11-19 13:14:45.559473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba100, cid 0, qid 0 00:22:42.337 [2024-11-19 13:14:45.559534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.337 [2024-11-19 13:14:45.559540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.337 [2024-11-19 13:14:45.559543] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559546] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba100) on tqpair=0x1d58690 00:22:42.337 [2024-11-19 13:14:45.559550] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:42.337 [2024-11-19 13:14:45.559554] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:42.337 [2024-11-19 13:14:45.559561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:42.337 [2024-11-19 13:14:45.559668] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:42.337 [2024-11-19 13:14:45.559673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:42.337 [2024-11-19 13:14:45.559680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d58690) 00:22:42.337 [2024-11-19 13:14:45.559692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.337 [2024-11-19 13:14:45.559702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba100, cid 0, qid 0 00:22:42.337 [2024-11-19 13:14:45.559763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.337 [2024-11-19 13:14:45.559771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.337 [2024-11-19 13:14:45.559774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba100) on tqpair=0x1d58690 00:22:42.337 
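Viewed at the spec level, the state transitions above (read vs, read cap, check en, disable) and the CSTS.RDY waits that follow implement the standard NVMe controller enable handshake, carried here as Fabrics PROPERTY GET/SET capsules on the admin queue instead of MMIO register accesses. A sketch of that sequence; prop_get()/prop_set() are hypothetical stand-ins for the PROPERTY GET/SET round trips, not SPDK functions:

    /* Spec-level view of the enable handshake; offsets are the standard
     * NVMe controller register map. */
    #include <stdint.h>

    uint64_t prop_get(uint32_t offset);               /* assumed helper */
    void     prop_set(uint32_t offset, uint64_t val); /* assumed helper */

    #define NVME_REG_CAP  0x00  /* controller capabilities */
    #define NVME_REG_VS   0x08  /* version */
    #define NVME_REG_CC   0x14  /* controller configuration */
    #define NVME_REG_CSTS 0x1c  /* controller status */

    void enable_controller(void)
    {
        (void)prop_get(NVME_REG_VS);           /* "read vs" state */
        (void)prop_get(NVME_REG_CAP);          /* "read cap" state */

        uint64_t cc = prop_get(NVME_REG_CC);   /* "check en" state */
        if (cc & 1) {                          /* CC.EN already set: disable */
            prop_set(NVME_REG_CC, cc & ~1ull);
        }
        while (prop_get(NVME_REG_CSTS) & 1) {  /* wait for CSTS.RDY = 0 */
            /* poll admin completions */
        }

        prop_set(NVME_REG_CC, cc | 1);         /* "Setting CC.EN = 1" */
        while (!(prop_get(NVME_REG_CSTS) & 1)) { /* wait for CSTS.RDY = 1 */
            /* poll admin completions */
        }
        /* "CC.EN = 1 && CSTS.RDY = 1 - controller is ready" */
    }

In this run the controller starts with CC.EN = 0 && CSTS.RDY = 0, so the disable branch is skipped and the log goes straight to Setting CC.EN = 1.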
[2024-11-19 13:14:45.559782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:42.337 [2024-11-19 13:14:45.559790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d58690) 00:22:42.337 [2024-11-19 13:14:45.559803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.337 [2024-11-19 13:14:45.559812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba100, cid 0, qid 0 00:22:42.337 [2024-11-19 13:14:45.559876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.337 [2024-11-19 13:14:45.559882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.337 [2024-11-19 13:14:45.559885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba100) on tqpair=0x1d58690 00:22:42.337 [2024-11-19 13:14:45.559892] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:42.337 [2024-11-19 13:14:45.559897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:42.337 [2024-11-19 13:14:45.559903] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:42.337 [2024-11-19 13:14:45.559912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:42.337 [2024-11-19 13:14:45.559919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.559923] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d58690) 00:22:42.337 [2024-11-19 13:14:45.559929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.337 [2024-11-19 13:14:45.559939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba100, cid 0, qid 0 00:22:42.337 [2024-11-19 13:14:45.560028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.337 [2024-11-19 13:14:45.560034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.337 [2024-11-19 13:14:45.560038] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.560041] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d58690): datao=0, datal=4096, cccid=0 00:22:42.337 [2024-11-19 13:14:45.560045] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dba100) on tqpair(0x1d58690): expected_datao=0, payload_size=4096 00:22:42.337 [2024-11-19 13:14:45.560049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.560062] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.560067] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.601955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.337 [2024-11-19 13:14:45.601965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.337 [2024-11-19 13:14:45.601968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.601972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba100) on tqpair=0x1d58690 00:22:42.337 [2024-11-19 13:14:45.601978] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:42.337 [2024-11-19 13:14:45.601985] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:42.337 [2024-11-19 13:14:45.601990] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:42.337 [2024-11-19 13:14:45.601996] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:42.337 [2024-11-19 13:14:45.602000] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:42.337 [2024-11-19 13:14:45.602005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:42.337 [2024-11-19 13:14:45.602014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:42.337 [2024-11-19 13:14:45.602021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.602025] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.337 [2024-11-19 13:14:45.602028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d58690) 00:22:42.337 [2024-11-19 13:14:45.602035] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.337 [2024-11-19 13:14:45.602048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba100, cid 0, qid 0 00:22:42.337 [2024-11-19 13:14:45.602193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.337 [2024-11-19 13:14:45.602198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.337 [2024-11-19 13:14:45.602201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba100) on tqpair=0x1d58690 00:22:42.338 [2024-11-19 13:14:45.602210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602217] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d58690) 00:22:42.338 [2024-11-19 13:14:45.602222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.338 [2024-11-19 13:14:45.602227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.338 [2024-11-19 
13:14:45.602234] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d58690) 00:22:42.338 [2024-11-19 13:14:45.602239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.338 [2024-11-19 13:14:45.602244] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d58690) 00:22:42.338 [2024-11-19 13:14:45.602255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.338 [2024-11-19 13:14:45.602260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d58690) 00:22:42.338 [2024-11-19 13:14:45.602272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.338 [2024-11-19 13:14:45.602276] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:42.338 [2024-11-19 13:14:45.602284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:42.338 [2024-11-19 13:14:45.602291] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d58690) 00:22:42.338 [2024-11-19 13:14:45.602300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.338 [2024-11-19 13:14:45.602312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba100, cid 0, qid 0 00:22:42.338 [2024-11-19 13:14:45.602317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba280, cid 1, qid 0 00:22:42.338 [2024-11-19 13:14:45.602321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba400, cid 2, qid 0 00:22:42.338 [2024-11-19 13:14:45.602325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba580, cid 3, qid 0 00:22:42.338 [2024-11-19 13:14:45.602329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba700, cid 4, qid 0 00:22:42.338 [2024-11-19 13:14:45.602427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.338 [2024-11-19 13:14:45.602432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.338 [2024-11-19 13:14:45.602435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba700) on tqpair=0x1d58690 00:22:42.338 [2024-11-19 13:14:45.602445] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:42.338 [2024-11-19 13:14:45.602449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:42.338 [2024-11-19 13:14:45.602457] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:42.338 [2024-11-19 13:14:45.602462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:42.338 [2024-11-19 13:14:45.602468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d58690) 00:22:42.338 [2024-11-19 13:14:45.602480] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.338 [2024-11-19 13:14:45.602490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba700, cid 4, qid 0 00:22:42.338 [2024-11-19 13:14:45.602557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.338 [2024-11-19 13:14:45.602563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.338 [2024-11-19 13:14:45.602566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba700) on tqpair=0x1d58690 00:22:42.338 [2024-11-19 13:14:45.602621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:42.338 [2024-11-19 13:14:45.602631] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:42.338 [2024-11-19 13:14:45.602638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d58690) 00:22:42.338 [2024-11-19 13:14:45.602647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.338 [2024-11-19 13:14:45.602657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba700, cid 4, qid 0 00:22:42.338 [2024-11-19 13:14:45.602734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.338 [2024-11-19 13:14:45.602742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.338 [2024-11-19 13:14:45.602745] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602749] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d58690): datao=0, datal=4096, cccid=4 00:22:42.338 [2024-11-19 13:14:45.602753] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dba700) on tqpair(0x1d58690): expected_datao=0, payload_size=4096 00:22:42.338 [2024-11-19 13:14:45.602756] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602762] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.602766] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.338 [2024-11-19 
13:14:45.643955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.338 [2024-11-19 13:14:45.643966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.338 [2024-11-19 13:14:45.643969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.643973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba700) on tqpair=0x1d58690 00:22:42.338 [2024-11-19 13:14:45.643981] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:42.338 [2024-11-19 13:14:45.643994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:42.338 [2024-11-19 13:14:45.644004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:42.338 [2024-11-19 13:14:45.644011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.644015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d58690) 00:22:42.338 [2024-11-19 13:14:45.644022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.338 [2024-11-19 13:14:45.644034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba700, cid 4, qid 0 00:22:42.338 [2024-11-19 13:14:45.644142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.338 [2024-11-19 13:14:45.644148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.338 [2024-11-19 13:14:45.644151] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.644154] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d58690): datao=0, datal=4096, cccid=4 00:22:42.338 [2024-11-19 13:14:45.644158] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dba700) on tqpair(0x1d58690): expected_datao=0, payload_size=4096 00:22:42.338 [2024-11-19 13:14:45.644162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.644177] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.644181] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.685080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.338 [2024-11-19 13:14:45.685089] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.338 [2024-11-19 13:14:45.685092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.685095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba700) on tqpair=0x1d58690 00:22:42.338 [2024-11-19 13:14:45.685108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:42.338 [2024-11-19 13:14:45.685117] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:42.338 [2024-11-19 13:14:45.685124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.338 [2024-11-19 13:14:45.685128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1d58690) 00:22:42.339 [2024-11-19 13:14:45.685137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.339 [2024-11-19 13:14:45.685149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba700, cid 4, qid 0 00:22:42.339 [2024-11-19 13:14:45.685224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.339 [2024-11-19 13:14:45.685230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.339 [2024-11-19 13:14:45.685234] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.339 [2024-11-19 13:14:45.685237] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d58690): datao=0, datal=4096, cccid=4 00:22:42.339 [2024-11-19 13:14:45.685241] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dba700) on tqpair(0x1d58690): expected_datao=0, payload_size=4096 00:22:42.339 [2024-11-19 13:14:45.685245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.339 [2024-11-19 13:14:45.685257] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.339 [2024-11-19 13:14:45.685261] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.599 [2024-11-19 13:14:45.726103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.599 [2024-11-19 13:14:45.726115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.599 [2024-11-19 13:14:45.726119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.599 [2024-11-19 13:14:45.726123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba700) on tqpair=0x1d58690 00:22:42.599 [2024-11-19 13:14:45.726131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:42.599 [2024-11-19 13:14:45.726140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:42.599 [2024-11-19 13:14:45.726149] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:42.599 [2024-11-19 13:14:45.726154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:42.599 [2024-11-19 13:14:45.726159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:42.599 [2024-11-19 13:14:45.726164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:42.599 [2024-11-19 13:14:45.726168] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:42.599 [2024-11-19 13:14:45.726173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:42.599 [2024-11-19 13:14:45.726177] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:42.599 [2024-11-19 13:14:45.726191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.599 
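Once the "identify active ns" / "identify ns" / "identify namespace id descriptors" states above finish (the point where the log reports "Namespace 1 was added"), namespaces are visible through the host API. A short sketch, assuming a ctrlr handle obtained as in the earlier connect example:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Walk the active namespace list that the identify states above
     * populated. */
    static void list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);

            printf("nsid %u: %ju blocks of %u bytes\n", nsid,
                   (uintmax_t)nsdata->nsze,
                   spdk_nvme_ns_get_sector_size(ns));
        }
    }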
[2024-11-19 13:14:45.726195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d58690) 00:22:42.599 [2024-11-19 13:14:45.726203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-11-19 13:14:45.726209] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.599 [2024-11-19 13:14:45.726212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.599 [2024-11-19 13:14:45.726215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d58690) 00:22:42.599 [2024-11-19 13:14:45.726221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.599 [2024-11-19 13:14:45.726235] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba700, cid 4, qid 0 00:22:42.599 [2024-11-19 13:14:45.726241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba880, cid 5, qid 0 00:22:42.599 [2024-11-19 13:14:45.726318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.599 [2024-11-19 13:14:45.726324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.600 [2024-11-19 13:14:45.726327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba700) on tqpair=0x1d58690 00:22:42.600 [2024-11-19 13:14:45.726336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.600 [2024-11-19 13:14:45.726341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.600 [2024-11-19 13:14:45.726345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba880) on tqpair=0x1d58690 00:22:42.600 [2024-11-19 13:14:45.726356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d58690) 00:22:42.600 [2024-11-19 13:14:45.726365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-11-19 13:14:45.726375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba880, cid 5, qid 0 00:22:42.600 [2024-11-19 13:14:45.726440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.600 [2024-11-19 13:14:45.726446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.600 [2024-11-19 13:14:45.726449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba880) on tqpair=0x1d58690 00:22:42.600 [2024-11-19 13:14:45.726460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d58690) 00:22:42.600 [2024-11-19 13:14:45.726470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-11-19 13:14:45.726479] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba880, cid 5, qid 0 00:22:42.600 [2024-11-19 13:14:45.726540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.600 [2024-11-19 13:14:45.726545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.600 [2024-11-19 13:14:45.726548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba880) on tqpair=0x1d58690 00:22:42.600 [2024-11-19 13:14:45.726560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d58690) 00:22:42.600 [2024-11-19 13:14:45.726569] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-11-19 13:14:45.726578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba880, cid 5, qid 0 00:22:42.600 [2024-11-19 13:14:45.726638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.600 [2024-11-19 13:14:45.726643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.600 [2024-11-19 13:14:45.726646] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba880) on tqpair=0x1d58690 00:22:42.600 [2024-11-19 13:14:45.726663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d58690) 00:22:42.600 [2024-11-19 13:14:45.726673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-11-19 13:14:45.726683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d58690) 00:22:42.600 [2024-11-19 13:14:45.726692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-11-19 13:14:45.726698] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d58690) 00:22:42.600 [2024-11-19 13:14:45.726707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-11-19 13:14:45.726714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726717] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d58690) 00:22:42.600 [2024-11-19 13:14:45.726723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-11-19 13:14:45.726734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba880, cid 5, qid 0 00:22:42.600 
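The four GET LOG PAGE commands above (error 01h, SMART/health 02h, firmware slot 03h, commands supported and effects 05h) are issued while the init code builds its supported-log-page mask. An application can fetch the same pages after init through the public API; a sketch, assuming the ctrlr handle from the earlier example, fetching the health page the way the log shows it requested with nsid:ffffffff:

    #include <stdio.h>
    #include <stdbool.h>
    #include "spdk/nvme.h"

    static bool done;

    static void log_page_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        done = true;
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "get log page failed\n");
        }
    }

    /* Fetch the SMART / health page (02h). */
    static int get_health_page(struct spdk_nvme_ctrlr *ctrlr)
    {
        static struct spdk_nvme_health_information_page health;

        if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                SPDK_NVME_LOG_HEALTH_INFORMATION, SPDK_NVME_GLOBAL_NS_TAG,
                &health, sizeof(health), 0, log_page_cb, NULL) != 0) {
            return -1;
        }
        while (!done) {                /* admin commands complete async */
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        printf("composite temperature: %u K\n", health.temperature);
        return 0;
    }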
[2024-11-19 13:14:45.726739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba700, cid 4, qid 0 00:22:42.600 [2024-11-19 13:14:45.726743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbaa00, cid 6, qid 0 00:22:42.600 [2024-11-19 13:14:45.726747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbab80, cid 7, qid 0 00:22:42.600 [2024-11-19 13:14:45.726887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.600 [2024-11-19 13:14:45.726893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.600 [2024-11-19 13:14:45.726896] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726900] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d58690): datao=0, datal=8192, cccid=5 00:22:42.600 [2024-11-19 13:14:45.726904] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dba880) on tqpair(0x1d58690): expected_datao=0, payload_size=8192 00:22:42.600 [2024-11-19 13:14:45.726908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726926] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726930] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.600 [2024-11-19 13:14:45.726943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.600 [2024-11-19 13:14:45.726946] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726955] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d58690): datao=0, datal=512, cccid=4 00:22:42.600 [2024-11-19 13:14:45.726959] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dba700) on tqpair(0x1d58690): expected_datao=0, payload_size=512 00:22:42.600 [2024-11-19 13:14:45.726963] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726968] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726971] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.600 [2024-11-19 13:14:45.726981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.600 [2024-11-19 13:14:45.726984] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.726987] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d58690): datao=0, datal=512, cccid=6 00:22:42.600 [2024-11-19 13:14:45.726991] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dbaa00) on tqpair(0x1d58690): expected_datao=0, payload_size=512 00:22:42.600 [2024-11-19 13:14:45.726995] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.727002] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.727005] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.727010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:42.600 [2024-11-19 13:14:45.727015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:42.600 [2024-11-19 13:14:45.727018] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.727021] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d58690): datao=0, datal=4096, cccid=7 00:22:42.600 [2024-11-19 13:14:45.727025] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dbab80) on tqpair(0x1d58690): expected_datao=0, payload_size=4096 00:22:42.600 [2024-11-19 13:14:45.727029] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.727035] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.727038] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.727045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.600 [2024-11-19 13:14:45.727050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.600 [2024-11-19 13:14:45.727053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.727057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba880) on tqpair=0x1d58690 00:22:42.600 [2024-11-19 13:14:45.727067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.600 [2024-11-19 13:14:45.727072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.600 [2024-11-19 13:14:45.727075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.727079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba700) on tqpair=0x1d58690 00:22:42.600 [2024-11-19 13:14:45.727087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.600 [2024-11-19 13:14:45.727092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.600 [2024-11-19 13:14:45.727095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.727098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dbaa00) on tqpair=0x1d58690 00:22:42.600 [2024-11-19 13:14:45.727104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.600 [2024-11-19 13:14:45.727110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.600 [2024-11-19 13:14:45.727113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.600 [2024-11-19 13:14:45.727116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dbab80) on tqpair=0x1d58690 00:22:42.600 ===================================================== 00:22:42.600 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:42.600 ===================================================== 00:22:42.600 Controller Capabilities/Features 00:22:42.600 ================================ 00:22:42.600 Vendor ID: 8086 00:22:42.600 Subsystem Vendor ID: 8086 00:22:42.600 Serial Number: SPDK00000000000001 00:22:42.600 Model Number: SPDK bdev Controller 00:22:42.600 Firmware Version: 25.01 00:22:42.600 Recommended Arb Burst: 6 00:22:42.600 IEEE OUI Identifier: e4 d2 5c 00:22:42.600 Multi-path I/O 00:22:42.600 May have multiple subsystem ports: Yes 00:22:42.600 May have multiple controllers: Yes 00:22:42.600 Associated with SR-IOV VF: No 00:22:42.600 Max Data Transfer Size: 131072 00:22:42.600 Max Number of Namespaces: 32 00:22:42.600 Max Number of I/O Queues: 127 00:22:42.600 NVMe Specification Version (VS): 1.3 00:22:42.601 NVMe Specification Version (Identify): 1.3 
00:22:42.601 Maximum Queue Entries: 128 00:22:42.601 Contiguous Queues Required: Yes 00:22:42.601 Arbitration Mechanisms Supported 00:22:42.601 Weighted Round Robin: Not Supported 00:22:42.601 Vendor Specific: Not Supported 00:22:42.601 Reset Timeout: 15000 ms 00:22:42.601 Doorbell Stride: 4 bytes 00:22:42.601 NVM Subsystem Reset: Not Supported 00:22:42.601 Command Sets Supported 00:22:42.601 NVM Command Set: Supported 00:22:42.601 Boot Partition: Not Supported 00:22:42.601 Memory Page Size Minimum: 4096 bytes 00:22:42.601 Memory Page Size Maximum: 4096 bytes 00:22:42.601 Persistent Memory Region: Not Supported 00:22:42.601 Optional Asynchronous Events Supported 00:22:42.601 Namespace Attribute Notices: Supported 00:22:42.601 Firmware Activation Notices: Not Supported 00:22:42.601 ANA Change Notices: Not Supported 00:22:42.601 PLE Aggregate Log Change Notices: Not Supported 00:22:42.601 LBA Status Info Alert Notices: Not Supported 00:22:42.601 EGE Aggregate Log Change Notices: Not Supported 00:22:42.601 Normal NVM Subsystem Shutdown event: Not Supported 00:22:42.601 Zone Descriptor Change Notices: Not Supported 00:22:42.601 Discovery Log Change Notices: Not Supported 00:22:42.601 Controller Attributes 00:22:42.601 128-bit Host Identifier: Supported 00:22:42.601 Non-Operational Permissive Mode: Not Supported 00:22:42.601 NVM Sets: Not Supported 00:22:42.601 Read Recovery Levels: Not Supported 00:22:42.601 Endurance Groups: Not Supported 00:22:42.601 Predictable Latency Mode: Not Supported 00:22:42.601 Traffic Based Keep ALive: Not Supported 00:22:42.601 Namespace Granularity: Not Supported 00:22:42.601 SQ Associations: Not Supported 00:22:42.601 UUID List: Not Supported 00:22:42.601 Multi-Domain Subsystem: Not Supported 00:22:42.601 Fixed Capacity Management: Not Supported 00:22:42.601 Variable Capacity Management: Not Supported 00:22:42.601 Delete Endurance Group: Not Supported 00:22:42.601 Delete NVM Set: Not Supported 00:22:42.601 Extended LBA Formats Supported: Not Supported 00:22:42.601 Flexible Data Placement Supported: Not Supported 00:22:42.601 00:22:42.601 Controller Memory Buffer Support 00:22:42.601 ================================ 00:22:42.601 Supported: No 00:22:42.601 00:22:42.601 Persistent Memory Region Support 00:22:42.601 ================================ 00:22:42.601 Supported: No 00:22:42.601 00:22:42.601 Admin Command Set Attributes 00:22:42.601 ============================ 00:22:42.601 Security Send/Receive: Not Supported 00:22:42.601 Format NVM: Not Supported 00:22:42.601 Firmware Activate/Download: Not Supported 00:22:42.601 Namespace Management: Not Supported 00:22:42.601 Device Self-Test: Not Supported 00:22:42.601 Directives: Not Supported 00:22:42.601 NVMe-MI: Not Supported 00:22:42.601 Virtualization Management: Not Supported 00:22:42.601 Doorbell Buffer Config: Not Supported 00:22:42.601 Get LBA Status Capability: Not Supported 00:22:42.601 Command & Feature Lockdown Capability: Not Supported 00:22:42.601 Abort Command Limit: 4 00:22:42.601 Async Event Request Limit: 4 00:22:42.601 Number of Firmware Slots: N/A 00:22:42.601 Firmware Slot 1 Read-Only: N/A 00:22:42.601 Firmware Activation Without Reset: N/A 00:22:42.601 Multiple Update Detection Support: N/A 00:22:42.601 Firmware Update Granularity: No Information Provided 00:22:42.601 Per-Namespace SMART Log: No 00:22:42.601 Asymmetric Namespace Access Log Page: Not Supported 00:22:42.601 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:42.601 Command Effects Log Page: Supported 00:22:42.601 Get Log Page Extended 
Data: Supported 00:22:42.601 Telemetry Log Pages: Not Supported 00:22:42.601 Persistent Event Log Pages: Not Supported 00:22:42.601 Supported Log Pages Log Page: May Support 00:22:42.601 Commands Supported & Effects Log Page: Not Supported 00:22:42.601 Feature Identifiers & Effects Log Page:May Support 00:22:42.601 NVMe-MI Commands & Effects Log Page: May Support 00:22:42.601 Data Area 4 for Telemetry Log: Not Supported 00:22:42.601 Error Log Page Entries Supported: 128 00:22:42.601 Keep Alive: Supported 00:22:42.601 Keep Alive Granularity: 10000 ms 00:22:42.601 00:22:42.601 NVM Command Set Attributes 00:22:42.601 ========================== 00:22:42.601 Submission Queue Entry Size 00:22:42.601 Max: 64 00:22:42.601 Min: 64 00:22:42.601 Completion Queue Entry Size 00:22:42.601 Max: 16 00:22:42.601 Min: 16 00:22:42.601 Number of Namespaces: 32 00:22:42.601 Compare Command: Supported 00:22:42.601 Write Uncorrectable Command: Not Supported 00:22:42.601 Dataset Management Command: Supported 00:22:42.601 Write Zeroes Command: Supported 00:22:42.601 Set Features Save Field: Not Supported 00:22:42.601 Reservations: Supported 00:22:42.601 Timestamp: Not Supported 00:22:42.601 Copy: Supported 00:22:42.601 Volatile Write Cache: Present 00:22:42.601 Atomic Write Unit (Normal): 1 00:22:42.601 Atomic Write Unit (PFail): 1 00:22:42.601 Atomic Compare & Write Unit: 1 00:22:42.601 Fused Compare & Write: Supported 00:22:42.601 Scatter-Gather List 00:22:42.601 SGL Command Set: Supported 00:22:42.601 SGL Keyed: Supported 00:22:42.601 SGL Bit Bucket Descriptor: Not Supported 00:22:42.601 SGL Metadata Pointer: Not Supported 00:22:42.601 Oversized SGL: Not Supported 00:22:42.601 SGL Metadata Address: Not Supported 00:22:42.601 SGL Offset: Supported 00:22:42.601 Transport SGL Data Block: Not Supported 00:22:42.601 Replay Protected Memory Block: Not Supported 00:22:42.601 00:22:42.601 Firmware Slot Information 00:22:42.601 ========================= 00:22:42.601 Active slot: 1 00:22:42.601 Slot 1 Firmware Revision: 25.01 00:22:42.601 00:22:42.601 00:22:42.601 Commands Supported and Effects 00:22:42.601 ============================== 00:22:42.601 Admin Commands 00:22:42.601 -------------- 00:22:42.601 Get Log Page (02h): Supported 00:22:42.601 Identify (06h): Supported 00:22:42.601 Abort (08h): Supported 00:22:42.601 Set Features (09h): Supported 00:22:42.601 Get Features (0Ah): Supported 00:22:42.601 Asynchronous Event Request (0Ch): Supported 00:22:42.601 Keep Alive (18h): Supported 00:22:42.601 I/O Commands 00:22:42.601 ------------ 00:22:42.601 Flush (00h): Supported LBA-Change 00:22:42.601 Write (01h): Supported LBA-Change 00:22:42.601 Read (02h): Supported 00:22:42.601 Compare (05h): Supported 00:22:42.601 Write Zeroes (08h): Supported LBA-Change 00:22:42.601 Dataset Management (09h): Supported LBA-Change 00:22:42.601 Copy (19h): Supported LBA-Change 00:22:42.601 00:22:42.601 Error Log 00:22:42.601 ========= 00:22:42.601 00:22:42.601 Arbitration 00:22:42.601 =========== 00:22:42.601 Arbitration Burst: 1 00:22:42.601 00:22:42.601 Power Management 00:22:42.601 ================ 00:22:42.601 Number of Power States: 1 00:22:42.601 Current Power State: Power State #0 00:22:42.601 Power State #0: 00:22:42.601 Max Power: 0.00 W 00:22:42.601 Non-Operational State: Operational 00:22:42.601 Entry Latency: Not Reported 00:22:42.601 Exit Latency: Not Reported 00:22:42.601 Relative Read Throughput: 0 00:22:42.601 Relative Read Latency: 0 00:22:42.601 Relative Write Throughput: 0 00:22:42.601 Relative Write Latency: 0 
00:22:42.601 Idle Power: Not Reported 00:22:42.601 Active Power: Not Reported 00:22:42.601 Non-Operational Permissive Mode: Not Supported 00:22:42.601 00:22:42.601 Health Information 00:22:42.601 ================== 00:22:42.601 Critical Warnings: 00:22:42.601 Available Spare Space: OK 00:22:42.601 Temperature: OK 00:22:42.601 Device Reliability: OK 00:22:42.601 Read Only: No 00:22:42.601 Volatile Memory Backup: OK 00:22:42.601 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:42.601 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:42.601 Available Spare: 0% 00:22:42.601 Available Spare Threshold: 0% 00:22:42.601 Life Percentage Used:[2024-11-19 13:14:45.727196] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.601 [2024-11-19 13:14:45.727201] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d58690) 00:22:42.601 [2024-11-19 13:14:45.727207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601 [2024-11-19 13:14:45.727219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbab80, cid 7, qid 0 00:22:42.601 [2024-11-19 13:14:45.727292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.601 [2024-11-19 13:14:45.727298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.601 [2024-11-19 13:14:45.727301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.601 [2024-11-19 13:14:45.727305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dbab80) on tqpair=0x1d58690 00:22:42.601 [2024-11-19 13:14:45.727331] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:42.602 [2024-11-19 13:14:45.727339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba100) on tqpair=0x1d58690 00:22:42.602 [2024-11-19 13:14:45.727345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-11-19 13:14:45.727350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba280) on tqpair=0x1d58690 00:22:42.602 [2024-11-19 13:14:45.727357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-11-19 13:14:45.727361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba400) on tqpair=0x1d58690 00:22:42.602 [2024-11-19 13:14:45.727365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-11-19 13:14:45.727370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba580) on tqpair=0x1d58690 00:22:42.602 [2024-11-19 13:14:45.727374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-11-19 13:14:45.727381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.727384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.727387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d58690) 00:22:42.602 [2024-11-19 13:14:45.727394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:42.602 [2024-11-19 13:14:45.727405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba580, cid 3, qid 0 00:22:42.602 [2024-11-19 13:14:45.727467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.602 [2024-11-19 13:14:45.727473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.602 [2024-11-19 13:14:45.727476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.727480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba580) on tqpair=0x1d58690 00:22:42.602 [2024-11-19 13:14:45.727485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.727489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.727492] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d58690) 00:22:42.602 [2024-11-19 13:14:45.727498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-11-19 13:14:45.727510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba580, cid 3, qid 0 00:22:42.602 [2024-11-19 13:14:45.727589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.602 [2024-11-19 13:14:45.727595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.602 [2024-11-19 13:14:45.727598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.727602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba580) on tqpair=0x1d58690 00:22:42.602 [2024-11-19 13:14:45.727606] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:42.602 [2024-11-19 13:14:45.727610] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:42.602 [2024-11-19 13:14:45.727618] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.727621] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.727625] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d58690) 00:22:42.602 [2024-11-19 13:14:45.727630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-11-19 13:14:45.727640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba580, cid 3, qid 0 00:22:42.602 [2024-11-19 13:14:45.727703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.602 [2024-11-19 13:14:45.727709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.602 [2024-11-19 13:14:45.727712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.727716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba580) on tqpair=0x1d58690 00:22:42.602 [2024-11-19 13:14:45.727726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.727730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.727733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d58690) 00:22:42.602 [2024-11-19 13:14:45.727739] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-11-19 13:14:45.727748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba580, cid 3, qid 0 00:22:42.602 [2024-11-19 13:14:45.727812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.602 [2024-11-19 13:14:45.727818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.602 [2024-11-19 13:14:45.727821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.727824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba580) on tqpair=0x1d58690 00:22:42.602 [2024-11-19 13:14:45.727832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.727836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.727839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d58690) 00:22:42.602 [2024-11-19 13:14:45.727845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-11-19 13:14:45.727855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba580, cid 3, qid 0 00:22:42.602 [2024-11-19 13:14:45.727923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.602 [2024-11-19 13:14:45.727928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.602 [2024-11-19 13:14:45.727931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.727934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba580) on tqpair=0x1d58690 00:22:42.602 [2024-11-19 13:14:45.727943] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.731952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.731958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d58690) 00:22:42.602 [2024-11-19 13:14:45.731964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-11-19 13:14:45.731976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dba580, cid 3, qid 0 00:22:42.602 [2024-11-19 13:14:45.732109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:42.602 [2024-11-19 13:14:45.732115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:42.602 [2024-11-19 13:14:45.732118] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:42.602 [2024-11-19 13:14:45.732121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dba580) on tqpair=0x1d58690 00:22:42.602 [2024-11-19 13:14:45.732128] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:22:42.602 0% 00:22:42.602 Data Units Read: 0 00:22:42.602 Data Units Written: 0 00:22:42.602 Host Read Commands: 0 00:22:42.602 Host Write Commands: 0 00:22:42.602 Controller Busy Time: 0 minutes 00:22:42.602 Power Cycles: 0 00:22:42.602 Power On Hours: 0 hours 00:22:42.602 Unsafe Shutdowns: 0 00:22:42.602 Unrecoverable Media Errors: 0 00:22:42.602 Lifetime Error Log Entries: 0 00:22:42.602 Warning Temperature Time: 0 
minutes 00:22:42.602 Critical Temperature Time: 0 minutes 00:22:42.602 00:22:42.602 Number of Queues 00:22:42.602 ================ 00:22:42.602 Number of I/O Submission Queues: 127 00:22:42.602 Number of I/O Completion Queues: 127 00:22:42.602 00:22:42.602 Active Namespaces 00:22:42.602 ================= 00:22:42.602 Namespace ID:1 00:22:42.602 Error Recovery Timeout: Unlimited 00:22:42.602 Command Set Identifier: NVM (00h) 00:22:42.602 Deallocate: Supported 00:22:42.602 Deallocated/Unwritten Error: Not Supported 00:22:42.602 Deallocated Read Value: Unknown 00:22:42.602 Deallocate in Write Zeroes: Not Supported 00:22:42.602 Deallocated Guard Field: 0xFFFF 00:22:42.602 Flush: Supported 00:22:42.602 Reservation: Supported 00:22:42.602 Namespace Sharing Capabilities: Multiple Controllers 00:22:42.602 Size (in LBAs): 131072 (0GiB) 00:22:42.602 Capacity (in LBAs): 131072 (0GiB) 00:22:42.602 Utilization (in LBAs): 131072 (0GiB) 00:22:42.602 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:42.602 EUI64: ABCDEF0123456789 00:22:42.602 UUID: b9a2e5ee-4100-4e84-b93a-6f082718fe60 00:22:42.602 Thin Provisioning: Not Supported 00:22:42.602 Per-NS Atomic Units: Yes 00:22:42.602 Atomic Boundary Size (Normal): 0 00:22:42.602 Atomic Boundary Size (PFail): 0 00:22:42.602 Atomic Boundary Offset: 0 00:22:42.602 Maximum Single Source Range Length: 65535 00:22:42.602 Maximum Copy Length: 65535 00:22:42.602 Maximum Source Range Count: 1 00:22:42.602 NGUID/EUI64 Never Reused: No 00:22:42.602 Namespace Write Protected: No 00:22:42.602 Number of LBA Formats: 1 00:22:42.602 Current LBA Format: LBA Format #00 00:22:42.602 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:42.602 00:22:42.602 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:42.602 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:42.602 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.602 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.602 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.602 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:42.602 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:42.602 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:42.602 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:42.602 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:42.602 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:42.602 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:42.602 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:42.602 rmmod nvme_tcp 00:22:42.602 rmmod nvme_fabrics 00:22:42.603 rmmod nvme_keyring 00:22:42.603 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:42.603 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:42.603 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:42.603 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2925582 ']' 00:22:42.603 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@518 -- # killprocess 2925582 00:22:42.603 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2925582 ']' 00:22:42.603 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2925582 00:22:42.603 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:42.603 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.603 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2925582 00:22:42.603 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:42.603 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:42.603 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2925582' 00:22:42.603 killing process with pid 2925582 00:22:42.603 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2925582 00:22:42.603 13:14:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2925582 00:22:42.862 13:14:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:42.862 13:14:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:42.862 13:14:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:42.862 13:14:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:42.862 13:14:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:42.862 13:14:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:42.862 13:14:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:42.862 13:14:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:42.862 13:14:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:42.862 13:14:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.862 13:14:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.862 13:14:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.767 13:14:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:44.767 00:22:44.767 real 0m9.423s 00:22:44.767 user 0m5.714s 00:22:44.767 sys 0m4.977s 00:22:44.767 13:14:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:44.767 13:14:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:44.767 ************************************ 00:22:44.767 END TEST nvmf_identify 00:22:44.767 ************************************ 00:22:45.026 13:14:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:45.026 13:14:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:45.026 13:14:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.026 13:14:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.026 ************************************ 
00:22:45.026 START TEST nvmf_perf 00:22:45.026 ************************************ 00:22:45.026 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:45.026 * Looking for test storage... 00:22:45.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:45.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.027 --rc genhtml_branch_coverage=1 00:22:45.027 --rc genhtml_function_coverage=1 00:22:45.027 --rc genhtml_legend=1 00:22:45.027 --rc geninfo_all_blocks=1 00:22:45.027 --rc geninfo_unexecuted_blocks=1 00:22:45.027 00:22:45.027 ' 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:45.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.027 --rc genhtml_branch_coverage=1 00:22:45.027 --rc genhtml_function_coverage=1 00:22:45.027 --rc genhtml_legend=1 00:22:45.027 --rc geninfo_all_blocks=1 00:22:45.027 --rc geninfo_unexecuted_blocks=1 00:22:45.027 00:22:45.027 ' 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:45.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.027 --rc genhtml_branch_coverage=1 00:22:45.027 --rc genhtml_function_coverage=1 00:22:45.027 --rc genhtml_legend=1 00:22:45.027 --rc geninfo_all_blocks=1 00:22:45.027 --rc geninfo_unexecuted_blocks=1 00:22:45.027 00:22:45.027 ' 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:45.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.027 --rc genhtml_branch_coverage=1 00:22:45.027 --rc genhtml_function_coverage=1 00:22:45.027 --rc genhtml_legend=1 00:22:45.027 --rc geninfo_all_blocks=1 00:22:45.027 --rc geninfo_unexecuted_blocks=1 00:22:45.027 00:22:45.027 ' 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.027 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.286 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:45.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.287 13:14:48 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:45.287 13:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:51.858 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.858 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:51.858 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:51.858 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:51.858 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:51.858 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:51.858 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:51.858 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:51.858 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:51.858 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:51.858 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:51.858 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:51.858 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:51.858 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:51.858 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:51.859 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:51.859 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:51.859 Found net devices under 0000:86:00.0: cvl_0_0 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.859 13:14:54 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:51.859 Found net devices under 0000:86:00.1: cvl_0_1 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.859 13:14:54 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:51.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:22:51.859 00:22:51.859 --- 10.0.0.2 ping statistics --- 00:22:51.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.859 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:22:51.859 00:22:51.859 --- 10.0.0.1 ping statistics --- 00:22:51.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.859 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2929345 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2929345 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2929345 ']' 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.859 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:51.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.860 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.860 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:51.860 [2024-11-19 13:14:54.395115] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:22:51.860 [2024-11-19 13:14:54.395161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.860 [2024-11-19 13:14:54.474542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.860 [2024-11-19 13:14:54.517033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.860 [2024-11-19 13:14:54.517070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.860 [2024-11-19 13:14:54.517077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.860 [2024-11-19 13:14:54.517083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.860 [2024-11-19 13:14:54.517088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.860 [2024-11-19 13:14:54.518654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.860 [2024-11-19 13:14:54.518768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.860 [2024-11-19 13:14:54.518875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.860 [2024-11-19 13:14:54.518876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.860 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.860 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:51.860 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:51.860 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:51.860 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:51.860 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.860 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:51.860 13:14:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:54.394 13:14:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:54.394 13:14:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:54.653 13:14:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:54.653 13:14:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:54.912 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
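For orientation: the stretch of trace that follows is the standard RPC sequence that assembles the TCP target — create the transport, create a subsystem, attach the Malloc0 and Nvme0n1 bdevs as namespaces, then open data and discovery listeners on 10.0.0.2:4420. Condensed into a sketch (rpc.py abbreviates the full scripts/rpc.py path used throughout this log):

# Target-side setup, condensed from the same rpc.py calls in the xtrace below
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420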
00:22:54.912 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:54.912 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:54.912 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:54.912 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:55.172 [2024-11-19 13:14:58.294046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.172 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:55.172 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:55.172 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:55.431 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:55.431 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:55.690 13:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.949 [2024-11-19 13:14:59.082346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.949 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:55.949 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:55.949 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:55.949 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:55.949 13:14:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:57.327 Initializing NVMe Controllers 00:22:57.327 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:57.327 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:57.327 Initialization complete. Launching workers. 
00:22:57.327 ======================================================== 00:22:57.327 Latency(us) 00:22:57.327 Device Information : IOPS MiB/s Average min max 00:22:57.327 PCIE (0000:5e:00.0) NSID 1 from core 0: 96646.07 377.52 330.55 39.58 4532.92 00:22:57.327 ======================================================== 00:22:57.327 Total : 96646.07 377.52 330.55 39.58 4532.92 00:22:57.327 00:22:57.327 13:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:58.705 Initializing NVMe Controllers 00:22:58.705 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:58.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:58.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:58.705 Initialization complete. Launching workers. 00:22:58.705 ======================================================== 00:22:58.705 Latency(us) 00:22:58.705 Device Information : IOPS MiB/s Average min max 00:22:58.705 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 69.89 0.27 14524.42 105.25 44773.98 00:22:58.705 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 64.90 0.25 15518.66 5984.11 54879.56 00:22:58.705 ======================================================== 00:22:58.705 Total : 134.79 0.53 15003.12 105.25 54879.56 00:22:58.705 00:22:58.705 13:15:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:00.083 Initializing NVMe Controllers 00:23:00.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:00.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:00.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:00.083 Initialization complete. Launching workers. 00:23:00.083 ======================================================== 00:23:00.083 Latency(us) 00:23:00.083 Device Information : IOPS MiB/s Average min max 00:23:00.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10877.90 42.49 2948.62 427.61 44760.78 00:23:00.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3858.22 15.07 8324.01 5423.65 16449.18 00:23:00.083 ======================================================== 00:23:00.083 Total : 14736.12 57.56 4356.01 427.61 44760.78 00:23:00.083 00:23:00.083 13:15:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:00.083 13:15:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:00.083 13:15:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:02.620 Initializing NVMe Controllers 00:23:02.620 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:02.620 Controller IO queue size 128, less than required. 00:23:02.621 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:02.621 Controller IO queue size 128, less than required. 00:23:02.621 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:02.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:02.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:02.621 Initialization complete. Launching workers. 00:23:02.621 ======================================================== 00:23:02.621 Latency(us) 00:23:02.621 Device Information : IOPS MiB/s Average min max 00:23:02.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1760.22 440.05 74056.69 42395.23 136131.32 00:23:02.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 590.23 147.56 230049.10 76504.59 359969.44 00:23:02.621 ======================================================== 00:23:02.621 Total : 2350.45 587.61 113228.63 42395.23 359969.44 00:23:02.621 00:23:02.621 13:15:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:02.879 No valid NVMe controllers or AIO or URING devices found 00:23:02.879 Initializing NVMe Controllers 00:23:02.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:02.879 Controller IO queue size 128, less than required. 00:23:02.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:02.879 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:02.879 Controller IO queue size 128, less than required. 00:23:02.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:02.879 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:02.879 WARNING: Some requested NVMe devices were skipped 00:23:02.879 13:15:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:05.414 Initializing NVMe Controllers 00:23:05.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:05.414 Controller IO queue size 128, less than required. 00:23:05.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:05.414 Controller IO queue size 128, less than required. 00:23:05.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:05.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:05.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:05.414 Initialization complete. Launching workers. 
00:23:05.414 00:23:05.414 ==================== 00:23:05.414 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:05.414 TCP transport: 00:23:05.414 polls: 11523 00:23:05.414 idle_polls: 8369 00:23:05.414 sock_completions: 3154 00:23:05.414 nvme_completions: 6101 00:23:05.414 submitted_requests: 9286 00:23:05.414 queued_requests: 1 00:23:05.414 00:23:05.414 ==================== 00:23:05.414 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:05.414 TCP transport: 00:23:05.414 polls: 11292 00:23:05.414 idle_polls: 7443 00:23:05.414 sock_completions: 3849 00:23:05.414 nvme_completions: 6625 00:23:05.414 submitted_requests: 10014 00:23:05.414 queued_requests: 1 00:23:05.414 ======================================================== 00:23:05.414 Latency(us) 00:23:05.414 Device Information : IOPS MiB/s Average min max 00:23:05.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1524.10 381.03 86080.86 67147.03 156206.87 00:23:05.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1655.03 413.76 77237.39 48235.30 110894.56 00:23:05.414 ======================================================== 00:23:05.414 Total : 3179.13 794.78 81477.03 48235.30 156206.87 00:23:05.414 00:23:05.414 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:05.414 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:05.673 rmmod nvme_tcp 00:23:05.673 rmmod nvme_fabrics 00:23:05.673 rmmod nvme_keyring 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2929345 ']' 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2929345 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2929345 ']' 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2929345 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2929345 00:23:05.673 13:15:08 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2929345' 00:23:05.673 killing process with pid 2929345 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2929345 00:23:05.673 13:15:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2929345 00:23:07.061 13:15:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:07.061 13:15:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:07.061 13:15:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:07.061 13:15:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:07.061 13:15:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:07.061 13:15:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:07.061 13:15:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:07.061 13:15:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:07.061 13:15:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:07.061 13:15:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.061 13:15:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.061 13:15:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:09.601 00:23:09.601 real 0m24.268s 00:23:09.601 user 1m3.154s 00:23:09.601 sys 0m8.169s 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:09.601 ************************************ 00:23:09.601 END TEST nvmf_perf 00:23:09.601 ************************************ 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.601 ************************************ 00:23:09.601 START TEST nvmf_fio_host 00:23:09.601 ************************************ 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:09.601 * Looking for test storage... 
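Note on the teardown that just ran: the iptr helper (nvmf/common.sh@791 above) removes the test's firewall rule by filtering the tagged entry out of a full ruleset dump rather than deleting it by position. Judging from the three commands in the trace, it is presumably composed as a single pipeline, roughly:

# Assumed composition of the iptr cleanup seen above: drop every rule
# carrying the SPDK_NVMF comment tag, keep everything else intact
iptables-save | grep -v SPDK_NVMF | iptables-restore

This pairs with the ipts helper used during setup, which inserts each rule with -m comment --comment 'SPDK_NVMF:...' precisely so the cleanup can find it again.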
00:23:09.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:09.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.601 --rc genhtml_branch_coverage=1 00:23:09.601 --rc genhtml_function_coverage=1 00:23:09.601 --rc genhtml_legend=1 00:23:09.601 --rc geninfo_all_blocks=1 00:23:09.601 --rc geninfo_unexecuted_blocks=1 00:23:09.601 00:23:09.601 ' 00:23:09.601 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:09.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.601 --rc genhtml_branch_coverage=1 00:23:09.601 --rc genhtml_function_coverage=1 00:23:09.601 --rc genhtml_legend=1 00:23:09.602 --rc geninfo_all_blocks=1 00:23:09.602 --rc geninfo_unexecuted_blocks=1 00:23:09.602 00:23:09.602 ' 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:09.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.602 --rc genhtml_branch_coverage=1 00:23:09.602 --rc genhtml_function_coverage=1 00:23:09.602 --rc genhtml_legend=1 00:23:09.602 --rc geninfo_all_blocks=1 00:23:09.602 --rc geninfo_unexecuted_blocks=1 00:23:09.602 00:23:09.602 ' 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:09.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.602 --rc genhtml_branch_coverage=1 00:23:09.602 --rc genhtml_function_coverage=1 00:23:09.602 --rc genhtml_legend=1 00:23:09.602 --rc geninfo_all_blocks=1 00:23:09.602 --rc geninfo_unexecuted_blocks=1 00:23:09.602 00:23:09.602 ' 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.602 13:15:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:09.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:09.602 
13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:09.602 13:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.178 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:16.179 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:16.179 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:16.179 Found net devices under 0000:86:00.0: cvl_0_0 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:16.179 Found net devices under 0000:86:00.1: cvl_0_1 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:16.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:23:16.179 00:23:16.179 --- 10.0.0.2 ping statistics --- 00:23:16.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.179 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:16.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:23:16.179 00:23:16.179 --- 10.0.0.1 ping statistics --- 00:23:16.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.179 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2935966 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2935966 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2935966 ']' 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.179 [2024-11-19 13:15:18.757655] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
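The waitforlisten call above gives the freshly launched nvmf_tgt a bounded number of polls (max_retries=100 in the trace) to bring up its RPC socket at /var/tmp/spdk.sock before the test proceeds; the (( i == 0 )) check further down is the retries-exhausted case. A minimal sketch of that pattern, assuming the helper probes the socket with an RPC call (the real implementation lives in common/autotest_common.sh):

# Hypothetical simplification of waitforlisten; the rpc_get_methods probe
# is an assumption here, not a quote of the real helper
waitforlisten() {
  local pid=$1
  local rpc_addr=${2:-/var/tmp/spdk.sock}
  local max_retries=100 i
  for ((i = max_retries; i > 0; i--)); do
    kill -0 "$pid" 2>/dev/null || return 1                      # app died early
    rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break  # socket is up
    sleep 0.1
  done
  (( i == 0 )) && return 1  # retries exhausted
  return 0
}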
00:23:16.179 [2024-11-19 13:15:18.757704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.179 [2024-11-19 13:15:18.837015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:16.179 [2024-11-19 13:15:18.880109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.179 [2024-11-19 13:15:18.880148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.179 [2024-11-19 13:15:18.880155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.179 [2024-11-19 13:15:18.880162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.179 [2024-11-19 13:15:18.880167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.179 [2024-11-19 13:15:18.881755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.179 [2024-11-19 13:15:18.881863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.179 [2024-11-19 13:15:18.882002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.179 [2024-11-19 13:15:18.882003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:16.179 13:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:16.179 [2024-11-19 13:15:19.147854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.179 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:16.179 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:16.179 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.179 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:16.179 Malloc1 00:23:16.179 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:16.438 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:16.696 13:15:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:16.696 [2024-11-19 13:15:20.033414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.696 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:16.955 13:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:17.213 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:17.213 fio-3.35 00:23:17.213 Starting 1 thread 00:23:19.748 00:23:19.748 test: (groupid=0, jobs=1): 
err= 0: pid=2936346: Tue Nov 19 13:15:22 2024 00:23:19.748 read: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(90.8MiB/2005msec) 00:23:19.748 slat (nsec): min=1574, max=238914, avg=1734.29, stdev=2192.24 00:23:19.748 clat (usec): min=3172, max=11026, avg=6101.94, stdev=473.47 00:23:19.748 lat (usec): min=3203, max=11028, avg=6103.67, stdev=473.37 00:23:19.748 clat percentiles (usec): 00:23:19.748 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:23:19.748 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6194], 00:23:19.748 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6849], 00:23:19.748 | 99.00th=[ 7111], 99.50th=[ 7242], 99.90th=[ 8586], 99.95th=[ 9896], 00:23:19.748 | 99.99th=[10290] 00:23:19.748 bw ( KiB/s): min=45392, max=46976, per=99.93%, avg=46318.00, stdev=674.67, samples=4 00:23:19.748 iops : min=11348, max=11744, avg=11579.50, stdev=168.67, samples=4 00:23:19.748 write: IOPS=11.5k, BW=44.9MiB/s (47.1MB/s)(90.1MiB/2005msec); 0 zone resets 00:23:19.748 slat (nsec): min=1606, max=224634, avg=1783.84, stdev=1649.18 00:23:19.748 clat (usec): min=2428, max=8989, avg=4935.23, stdev=381.62 00:23:19.748 lat (usec): min=2443, max=8991, avg=4937.01, stdev=381.54 00:23:19.748 clat percentiles (usec): 00:23:19.748 | 1.00th=[ 4047], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4621], 00:23:19.748 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5014], 00:23:19.748 | 70.00th=[ 5145], 80.00th=[ 5211], 90.00th=[ 5407], 95.00th=[ 5538], 00:23:19.748 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 7635], 99.95th=[ 8717], 00:23:19.748 | 99.99th=[ 8979] 00:23:19.748 bw ( KiB/s): min=45712, max=46336, per=100.00%, avg=46036.00, stdev=313.91, samples=4 00:23:19.748 iops : min=11428, max=11584, avg=11509.00, stdev=78.48, samples=4 00:23:19.748 lat (msec) : 4=0.42%, 10=99.56%, 20=0.02% 00:23:19.748 cpu : usr=73.00%, sys=26.05%, ctx=112, majf=0, minf=3 00:23:19.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:19.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:19.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:19.748 issued rwts: total=23233,23071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:19.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:19.748 00:23:19.748 Run status group 0 (all jobs): 00:23:19.748 READ: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=90.8MiB (95.2MB), run=2005-2005msec 00:23:19.748 WRITE: bw=44.9MiB/s (47.1MB/s), 44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), io=90.1MiB (94.5MB), run=2005-2005msec 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- 
# local sanitizers 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:19.748 13:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:20.007 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:20.007 fio-3.35 00:23:20.007 Starting 1 thread 00:23:22.541 00:23:22.541 test: (groupid=0, jobs=1): err= 0: pid=2936921: Tue Nov 19 13:15:25 2024 00:23:22.541 read: IOPS=10.8k, BW=168MiB/s (176MB/s)(337MiB/2006msec) 00:23:22.541 slat (nsec): min=2446, max=95290, avg=2847.20, stdev=1284.30 00:23:22.541 clat (usec): min=1150, max=13418, avg=6819.70, stdev=1546.73 00:23:22.541 lat (usec): min=1153, max=13432, avg=6822.55, stdev=1546.88 00:23:22.541 clat percentiles (usec): 00:23:22.541 | 1.00th=[ 3720], 5.00th=[ 4424], 10.00th=[ 4883], 20.00th=[ 5473], 00:23:22.541 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6718], 60.00th=[ 7242], 00:23:22.541 | 70.00th=[ 7701], 80.00th=[ 8029], 90.00th=[ 8717], 95.00th=[ 9503], 00:23:22.541 | 99.00th=[10945], 99.50th=[11207], 99.90th=[12649], 99.95th=[13042], 00:23:22.541 | 99.99th=[13435] 00:23:22.541 bw ( KiB/s): min=78592, max=96480, per=50.88%, avg=87536.00, stdev=7382.81, samples=4 00:23:22.541 iops : min= 4912, max= 6030, avg=5471.00, stdev=461.43, samples=4 00:23:22.541 write: IOPS=6458, BW=101MiB/s (106MB/s)(179MiB/1775msec); 0 zone resets 
00:23:22.541 slat (usec): min=28, max=379, avg=31.86, stdev= 7.03 00:23:22.541 clat (usec): min=3237, max=15285, avg=8846.06, stdev=1463.14 00:23:22.541 lat (usec): min=3268, max=15396, avg=8877.92, stdev=1464.66 00:23:22.541 clat percentiles (usec): 00:23:22.541 | 1.00th=[ 5866], 5.00th=[ 6652], 10.00th=[ 7111], 20.00th=[ 7635], 00:23:22.541 | 30.00th=[ 8029], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9110], 00:23:22.541 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10814], 95.00th=[11469], 00:23:22.541 | 99.00th=[12780], 99.50th=[13304], 99.90th=[14746], 99.95th=[15008], 00:23:22.541 | 99.99th=[15139] 00:23:22.541 bw ( KiB/s): min=84096, max=100352, per=88.18%, avg=91112.00, stdev=6892.19, samples=4 00:23:22.541 iops : min= 5256, max= 6272, avg=5694.50, stdev=430.76, samples=4 00:23:22.541 lat (msec) : 2=0.01%, 4=1.25%, 10=89.77%, 20=8.97% 00:23:22.541 cpu : usr=84.45%, sys=14.86%, ctx=38, majf=0, minf=3 00:23:22.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:22.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:22.541 issued rwts: total=21571,11463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:22.541 00:23:22.541 Run status group 0 (all jobs): 00:23:22.541 READ: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=337MiB (353MB), run=2006-2006msec 00:23:22.541 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=179MiB (188MB), run=1775-1775msec 00:23:22.541 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:22.541 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:22.541 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:22.541 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:22.541 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:22.541 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:22.541 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:22.541 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:22.541 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:22.541 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:22.541 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:22.541 rmmod nvme_tcp 00:23:22.541 rmmod nvme_fabrics 00:23:22.541 rmmod nvme_keyring 00:23:22.541 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:22.541 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:22.542 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:22.542 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2935966 ']' 00:23:22.542 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2935966 00:23:22.542 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2935966 ']' 00:23:22.542 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 2935966 00:23:22.542 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:22.542 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.542 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2935966 00:23:22.801 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:22.801 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:22.801 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2935966' 00:23:22.801 killing process with pid 2935966 00:23:22.801 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2935966 00:23:22.801 13:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2935966 00:23:22.801 13:15:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:22.801 13:15:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:22.801 13:15:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:22.801 13:15:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:22.801 13:15:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:22.801 13:15:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:22.801 13:15:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:22.801 13:15:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:22.801 13:15:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:22.801 13:15:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.801 13:15:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.801 13:15:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:25.341 00:23:25.341 real 0m15.627s 00:23:25.341 user 0m45.258s 00:23:25.341 sys 0m6.549s 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.341 ************************************ 00:23:25.341 END TEST nvmf_fio_host 00:23:25.341 ************************************ 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.341 ************************************ 00:23:25.341 START TEST nvmf_failover 00:23:25.341 ************************************ 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:25.341 * Looking for test storage... 00:23:25.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:25.341 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:25.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.342 --rc genhtml_branch_coverage=1 00:23:25.342 --rc genhtml_function_coverage=1 00:23:25.342 --rc genhtml_legend=1 00:23:25.342 --rc geninfo_all_blocks=1 00:23:25.342 --rc geninfo_unexecuted_blocks=1 00:23:25.342 00:23:25.342 ' 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:25.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.342 --rc genhtml_branch_coverage=1 00:23:25.342 --rc genhtml_function_coverage=1 00:23:25.342 --rc genhtml_legend=1 00:23:25.342 --rc geninfo_all_blocks=1 00:23:25.342 --rc geninfo_unexecuted_blocks=1 00:23:25.342 00:23:25.342 ' 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:25.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.342 --rc genhtml_branch_coverage=1 00:23:25.342 --rc genhtml_function_coverage=1 00:23:25.342 --rc genhtml_legend=1 00:23:25.342 --rc geninfo_all_blocks=1 00:23:25.342 --rc geninfo_unexecuted_blocks=1 00:23:25.342 00:23:25.342 ' 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:25.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.342 --rc genhtml_branch_coverage=1 00:23:25.342 --rc genhtml_function_coverage=1 00:23:25.342 --rc genhtml_legend=1 00:23:25.342 --rc geninfo_all_blocks=1 00:23:25.342 --rc geninfo_unexecuted_blocks=1 00:23:25.342 00:23:25.342 ' 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:25.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
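
Condensed, the nvmf_fio_host pass that just finished above reduces to a handful of RPCs against the running nvmf_tgt followed by an LD_PRELOAD'd fio run. A minimal sketch, assuming the working directory is the SPDK checkout and nvmf_tgt is already up (the run itself used the absolute /var/jenkins/... paths; flags, address, and job file are copied verbatim from the log):

    # Enable the TCP transport (flags as captured in the run; -u 8192 sets the I/O unit size),
    # then create a 64 MiB malloc bdev with 512-byte blocks
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    # Publish it as a namespace of cnode1 and listen on the test address, plus discovery
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # fio drives the namespace through the SPDK ioengine; the whole connect string is the "filename"
    LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The spaces inside the connect string are why the test quotes the whole --filename argument; unquoted, fio would split it into separate options.
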
00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:25.342 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.343 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.343 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.343 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:25.343 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:25.343 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:25.343 13:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:32.024 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.024 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:32.024 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:32.024 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:32.025 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:32.025 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:32.025 Found net devices under 0000:86:00.0: cvl_0_0 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:32.025 Found net devices under 0000:86:00.1: cvl_0_1 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
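
The nvmf_tcp_init step that runs next is easier to follow in one piece: the first e810 port found above (cvl_0_0) becomes the target side inside a private network namespace, while the second (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch of the commands common.sh issues below (the iptables -m comment tag the script adds is dropped here for brevity):

    ip netns add cvl_0_0_ns_spdk                  # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

Pinging in both directions before the target starts means any later connect failure points at NVMe/TCP itself rather than basic routing.
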
00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:32.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:23:32.025 00:23:32.025 --- 10.0.0.2 ping statistics --- 00:23:32.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.025 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:32.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:23:32.025 00:23:32.025 --- 10.0.0.1 ping statistics --- 00:23:32.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.025 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:32.025 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2940893 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2940893 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2940893 ']' 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:32.026 [2024-11-19 13:15:34.447097] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:23:32.026 [2024-11-19 13:15:34.447151] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.026 [2024-11-19 13:15:34.526011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:32.026 [2024-11-19 13:15:34.568334] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:32.026 [2024-11-19 13:15:34.568373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.026 [2024-11-19 13:15:34.568380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.026 [2024-11-19 13:15:34.568386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.026 [2024-11-19 13:15:34.568392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.026 [2024-11-19 13:15:34.569860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.026 [2024-11-19 13:15:34.569980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.026 [2024-11-19 13:15:34.569981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:32.026 [2024-11-19 13:15:34.874709] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.026 13:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:32.026 Malloc0 00:23:32.026 13:15:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:32.026 13:15:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:32.304 13:15:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:32.562 [2024-11-19 13:15:35.698591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.562 13:15:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:32.562 [2024-11-19 13:15:35.899131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:32.562 13:15:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:32.820 [2024-11-19 13:15:36.087727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:23:32.820 13:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:32.820 13:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2941159 00:23:32.820 13:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:32.820 13:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2941159 /var/tmp/bdevperf.sock 00:23:32.820 13:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2941159 ']' 00:23:32.820 13:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.820 13:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.820 13:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.820 13:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.820 13:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:33.078 13:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.078 13:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:33.078 13:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:33.336 NVMe0n1 00:23:33.336 13:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:33.902 00:23:33.902 13:15:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2941388 00:23:33.902 13:15:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:33.902 13:15:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:34.838 13:15:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:35.097 [2024-11-19 13:15:38.295256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6b2d0 is same with the state(6) to be set 00:23:35.097 [2024-11-19 13:15:38.295306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6b2d0 is same with the state(6) to be set 00:23:35.097 [2024-11-19 13:15:38.295315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6b2d0 is same with the state(6) to be set 00:23:35.097 
[2024-11-19 13:15:38.295322 .. 13:15:38.295644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6b2d0 is same with the state(6) to be set (identical entry logged several dozen more times with successive timestamps) 00:23:35.098 13:15:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:38.380 13:15:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:38.380 00:23:38.380 13:15:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:38.638 [2024-11-19 13:15:41.925967 .. 13:15:41.926058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6c060 is same with the state(6) to be set (entry repeated 9 times) 00:23:38.638
00:23:38.638 13:15:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:41.923 13:15:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:41.923 [2024-11-19 13:15:45.135217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:41.923 13:15:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:43.012 13:15:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:43.012 [2024-11-19 13:15:46.350202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ce30 is same with the state(6) to be set
00:23:43.012 [2024-11-19 13:15:46.350235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ce30 is same with the state(6) to be set
00:23:43.012 [2024-11-19 13:15:46.350242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ce30 is same with the state(6) to be set
00:23:43.012 13:15:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2941388
00:23:49.588 {
00:23:49.588 "results": [
00:23:49.588 {
00:23:49.588 "job": "NVMe0n1",
00:23:49.588 "core_mask": "0x1",
00:23:49.588 "workload": "verify",
00:23:49.588 "status": "finished",
00:23:49.588 "verify_range": {
00:23:49.588 "start": 0,
00:23:49.588 "length": 16384
00:23:49.588 },
00:23:49.588 "queue_depth": 128,
00:23:49.588 "io_size": 4096,
00:23:49.588 "runtime": 15.010666,
00:23:49.588 "iops": 11034.553696684745,
00:23:49.588 "mibps": 43.103725377674785,
00:23:49.588 "io_failed": 3813,
00:23:49.588 "io_timeout": 0,
00:23:49.589 "avg_latency_us": 11316.579483158584,
00:23:49.589 "min_latency_us": 425.62782608695653,
00:23:49.589 "max_latency_us": 32824.98782608696
00:23:49.589 }
00:23:49.589 ],
00:23:49.589 "core_count": 1
00:23:49.589 }
00:23:49.589 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2941159
00:23:49.589 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2941159 ']'
00:23:49.589 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2941159
00:23:49.589 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:23:49.589 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:49.589 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2941159
00:23:49.589 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:49.589 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:49.589 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2941159'
00:23:49.589 killing process with pid 2941159
00:23:49.589 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2941159
00:23:49.589 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2941159
00:23:49.589 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:49.589 [2024-11-19 13:15:36.164767] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:23:49.589 [2024-11-19 13:15:36.164821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941159 ]
00:23:49.589 [2024-11-19 13:15:36.242333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:49.589 [2024-11-19 13:15:36.284685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:49.589 Running I/O for 15 seconds...
00:23:49.589 11060.00 IOPS, 43.20 MiB/s [2024-11-19T12:15:52.966Z]
00:23:49.589 [2024-11-19 13:15:38.296972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.589 [2024-11-19 13:15:38.297011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.589 [... analogous nvme_io_qpair_print_command / ABORTED - SQ DELETION completion pairs for READ lba 97184-97728 and WRITE lba 97736-98128 (13:15:38.297025 through 13:15:38.298828) omitted ...]
00:23:49.592 [2024-11-19 13:15:38.298848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:49.592 [2024-11-19 13:15:38.298855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98136 len:8 PRP1 0x0 PRP2 0x0
00:23:49.592 [2024-11-19 13:15:38.298862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.592 [... analogous nvme_qpair_abort_queued_reqs ("aborting queued i/o") / Command completed manually sequences for WRITE lba 98144-98192 (13:15:38.298871 through 13:15:38.299056) omitted ...]
00:23:49.592 [2024-11-19 13:15:38.299100] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:49.592 [2024-11-19 13:15:38.299121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:49.592 [2024-11-19 13:15:38.299129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.592 [... identical ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs for cid:1, cid:2, cid:3 (13:15:38.299136 through 13:15:38.299170) omitted ...]
00:23:49.592 [2024-11-19 13:15:38.309899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:49.592 [2024-11-19 13:15:38.309956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x72d340 (9): Bad file descriptor
00:23:49.592 [2024-11-19 13:15:38.313847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:49.592 [2024-11-19 13:15:38.336421] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:23:49.592 10898.00 IOPS, 42.57 MiB/s [2024-11-19T12:15:52.969Z]
00:23:49.592 10963.33 IOPS, 42.83 MiB/s [2024-11-19T12:15:52.969Z]
00:23:49.592 11001.25 IOPS, 42.97 MiB/s [2024-11-19T12:15:52.969Z]
00:23:49.592 [2024-11-19 13:15:41.926350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.592 [2024-11-19 13:15:41.926384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ + ABORTED - SQ DELETION pair repeats for the 34 remaining in-flight reads, lba:31344 through lba:31608 in steps of 8, with varying cids ...]
00:23:49.593 [2024-11-19 13:15:41.926927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:49.593 [2024-11-19 13:15:41.926936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE + ABORTED - SQ DELETION pair repeats for the 79 remaining in-flight writes, lba:31624 through lba:32248 in steps of 8, with varying cids ...]
00:23:49.595 [2024-11-19 13:15:41.928192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:49.595 [2024-11-19 13:15:41.928199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32256 len:8 PRP1 0x0 PRP2 0x0
00:23:49.595 [2024-11-19 13:15:41.928207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.595 [2024-11-19 13:15:41.928245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:49.595 [2024-11-19 13:15:41.928255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.595 [2024-11-19 13:15:41.928263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:49.595 [2024-11-19 13:15:41.928271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.595 [2024-11-19 13:15:41.928278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:49.595 [2024-11-19 13:15:41.928285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.595 [2024-11-19 13:15:41.928292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:49.595 [2024-11-19 13:15:41.928298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.595 [2024-11-19 13:15:41.928306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72d340 is same with the state(6) to be set
00:23:49.595 [2024-11-19 13:15:41.928499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
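For scale, the IOPS/MiB-s samples printed above are consistent with the 4 KiB I/O size implied by len:8 (eight 512-byte blocks): MiB/s = IOPS * 4096 / 2^20. A quick sanity check with bc, assuming it is installed:

  $ echo 'scale=2; 10898.00 * 4096 / 1048576' | bc
  42.57
  $ echo 'scale=2; 11001.25 * 4096 / 1048576' | bc
  42.97

which matches the 42.57 and 42.97 MiB/s figures reported next to those IOPS values.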
00:23:49.595 [2024-11-19 13:15:41.928506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:49.595 [2024-11-19 13:15:41.928513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32264 len:8 PRP1 0x0 PRP2 0x0
00:23:49.596 [2024-11-19 13:15:41.928520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same "aborting queued i/o" / "Command completed manually" / WRITE / ABORTED - SQ DELETION group repeats for the 11 queued writes lba:32272 through lba:32352 in steps of 8 ...]
00:23:49.596 [2024-11-19 13:15:41.928799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:49.596 [2024-11-19 13:15:41.928804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:49.596 [2024-11-19 13:15:41.928809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31336 len:8 PRP1 0x0 PRP2 0x0
00:23:49.596 [2024-11-19 13:15:41.928817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same group repeats for the 34 queued reads lba:31344 through lba:31608 in steps of 8; there is a timestamp gap between 13:15:41.928872 and 13:15:41.940197 inside the lba:31360 group ...]
00:23:49.598 [2024-11-19 13:15:41.941268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:49.598 [2024-11-19 13:15:41.941275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:49.598 [2024-11-19 13:15:41.941282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31616 len:8 PRP1 0x0 PRP2 0x0
00:23:49.598 [2024-11-19 13:15:41.941291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.598 [2024-11-19 13:15:41.941300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:49.598 [2024-11-19 13:15:41.941309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:49.598 [2024-11-19 13:15:41.941316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31624 len:8 PRP1 0x0 PRP2 0x0
00:23:49.598 [2024-11-19 13:15:41.941325] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31632 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31640 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31648 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31656 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31664 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31672 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31680 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31688 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31696 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31704 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31712 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31720 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:49.598 [2024-11-19 13:15:41.941736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31728 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31736 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31744 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31752 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31760 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31768 len:8 PRP1 0x0 PRP2 0x0 00:23:49.598 [2024-11-19 13:15:41.941925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.598 [2024-11-19 13:15:41.941934] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.598 [2024-11-19 13:15:41.941942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.598 [2024-11-19 13:15:41.941954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31776 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.941965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.941974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.941981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.941988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31784 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.941997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.942006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.942013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.942021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31792 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.942030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.942038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.942045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.942052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31800 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.942060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.942070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.942077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.942086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31808 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.942095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.942104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.942110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.942120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31816 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.942129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.942138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.942145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.942152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31824 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.942160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.942169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.942177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.942184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31832 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.942193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.942201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.942208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.942215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31840 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.942226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.942236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.942243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.942251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31848 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.949514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.949533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.949543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.949553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31856 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.949565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.949577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.949586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.949596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31864 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.949608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.949619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 
13:15:41.949631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.949642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31872 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.949654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.949666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.949676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.949688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31880 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.949700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.949713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.949722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.949732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31888 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.949744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.949757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.949766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.949776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31896 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.949788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.949800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.949808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.949819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31904 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.949832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.949845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.949854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.949865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31912 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.949876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.949889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.949898] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.949907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31920 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.949920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.949932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.949942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.949957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31928 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.949969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.949984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.949993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.950004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31936 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.950016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.950028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.950037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.950048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31944 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.950060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.950073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.950082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.950092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31952 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.950104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.599 [2024-11-19 13:15:41.950117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.599 [2024-11-19 13:15:41.950125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.599 [2024-11-19 13:15:41.950136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31960 len:8 PRP1 0x0 PRP2 0x0 00:23:49.599 [2024-11-19 13:15:41.950147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31968 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31976 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31984 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31992 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32000 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32008 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 
13:15:41.950445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32016 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32024 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32032 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32040 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32048 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32056 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32064 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32072 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32080 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32088 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32096 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32104 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.950964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.950974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.950984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:32112 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.950996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.951011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.951020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.951030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32120 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.951042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.951055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.951063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.951074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32128 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.951085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.951097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.951107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.951117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32136 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.951128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.951141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.600 [2024-11-19 13:15:41.951151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.600 [2024-11-19 13:15:41.951160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32144 len:8 PRP1 0x0 PRP2 0x0 00:23:49.600 [2024-11-19 13:15:41.951173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.600 [2024-11-19 13:15:41.951186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.601 [2024-11-19 13:15:41.951195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.601 [2024-11-19 13:15:41.951204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32152 len:8 PRP1 0x0 PRP2 0x0 00:23:49.601 [2024-11-19 13:15:41.951216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.601 [2024-11-19 13:15:41.951229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.601 [2024-11-19 13:15:41.951237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.601 [2024-11-19 13:15:41.951248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32160 len:8 PRP1 0x0 PRP2 0x0 
00:23:49.601 [2024-11-19 13:15:41.951260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.601 [2024-11-19 13:15:41.951273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.601 [2024-11-19 13:15:41.951283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.601 [2024-11-19 13:15:41.951293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32168 len:8 PRP1 0x0 PRP2 0x0 00:23:49.601 [2024-11-19 13:15:41.951305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.601 [2024-11-19 13:15:41.951317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.601 [2024-11-19 13:15:41.951326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.601 [2024-11-19 13:15:41.951337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32176 len:8 PRP1 0x0 PRP2 0x0 00:23:49.601 [2024-11-19 13:15:41.951351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.601 [2024-11-19 13:15:41.951364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.601 [2024-11-19 13:15:41.951374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.601 [2024-11-19 13:15:41.951383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32184 len:8 PRP1 0x0 PRP2 0x0 00:23:49.601 [2024-11-19 13:15:41.951395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.601 [2024-11-19 13:15:41.951407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.601 [2024-11-19 13:15:41.951416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.601 [2024-11-19 13:15:41.951426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32192 len:8 PRP1 0x0 PRP2 0x0 00:23:49.601 [2024-11-19 13:15:41.951438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.601 [2024-11-19 13:15:41.951451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.601 [2024-11-19 13:15:41.951461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.601 [2024-11-19 13:15:41.951471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32200 len:8 PRP1 0x0 PRP2 0x0 00:23:49.601 [2024-11-19 13:15:41.951483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.601 [2024-11-19 13:15:41.951495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.601 [2024-11-19 13:15:41.951505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.601 [2024-11-19 13:15:41.951515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32208 len:8 PRP1 0x0 PRP2 0x0 00:23:49.601 [2024-11-19 13:15:41.951528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.601 [2024-11-19 13:15:41.951540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.601 [2024-11-19 13:15:41.951550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.601 [2024-11-19 13:15:41.951559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32216 len:8 PRP1 0x0 PRP2 0x0 00:23:49.601 [2024-11-19 13:15:41.951571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.601 [2024-11-19 13:15:41.951584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.601 [2024-11-19 13:15:41.951592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.601 [2024-11-19 13:15:41.951603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32224 len:8 PRP1 0x0 PRP2 0x0 00:23:49.601 [2024-11-19 13:15:41.951616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.601 [2024-11-19 13:15:41.951628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.601 [2024-11-19 13:15:41.951638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.601 [2024-11-19 13:15:41.951647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32232 len:8 PRP1 0x0 PRP2 0x0 00:23:49.601 [2024-11-19 13:15:41.951659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.601 [2024-11-19 13:15:41.951672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.601 [2024-11-19 13:15:41.951682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.601 [2024-11-19 13:15:41.951693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32240 len:8 PRP1 0x0 PRP2 0x0 00:23:49.601 [2024-11-19 13:15:41.951706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.601 [2024-11-19 13:15:41.951718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.601 [2024-11-19 13:15:41.951727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.601 [2024-11-19 13:15:41.951738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32248 len:8 PRP1 0x0 PRP2 0x0 00:23:49.601 [2024-11-19 13:15:41.951749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.601 [2024-11-19 13:15:41.951762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.601 [2024-11-19 13:15:41.951771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.601 [2024-11-19 13:15:41.951781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32256 len:8 PRP1 0x0 PRP2 0x0 00:23:49.601 [2024-11-19 13:15:41.951792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
00:23:49.601 [2024-11-19 13:15:41.951848] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:49.601 [2024-11-19 13:15:41.951863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:23:49.601 [2024-11-19 13:15:41.951914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x72d340 (9): Bad file descriptor
00:23:49.601 [2024-11-19 13:15:41.957086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:23:49.601 [2024-11-19 13:15:41.984250] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:23:49.601 10888.00 IOPS, 42.53 MiB/s [2024-11-19T12:15:52.978Z] 10937.83 IOPS, 42.73 MiB/s [2024-11-19T12:15:52.978Z] 10964.71 IOPS, 42.83 MiB/s [2024-11-19T12:15:52.978Z] 10977.62 IOPS, 42.88 MiB/s [2024-11-19T12:15:52.978Z] 11007.67 IOPS, 43.00 MiB/s [2024-11-19T12:15:52.978Z]
[2024-11-19 13:15:46.349667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:49.601 [2024-11-19 13:15:46.349709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.601 [2024-11-19 13:15:46.349719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:49.601 [2024-11-19 13:15:46.349727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.601 [2024-11-19 13:15:46.349734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:49.601 [2024-11-19 13:15:46.349741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.601 [2024-11-19 13:15:46.349749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:49.601 [2024-11-19 13:15:46.349755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.601 [2024-11-19 13:15:46.349762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72d340 is same with the state(6) to be set
00:23:49.601 [2024-11-19 13:15:46.350897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.601 [2024-11-19 13:15:46.350918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0) repeats for READ lba:35936 through lba:36176 in steps of 8, all len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 with varying cid, timestamps 13:15:46.350932 through 13:15:46.351422 ...]
00:23:49.602 [2024-11-19 13:15:46.351430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.602 [2024-11-19 13:15:46.351437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.602 [2024-11-19 13:15:46.351445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.602 [2024-11-19 13:15:46.351453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.602 [2024-11-19 13:15:46.351461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.602 [2024-11-19 13:15:46.351468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.602 [2024-11-19 13:15:46.351476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.602 [2024-11-19 13:15:46.351483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.602 [2024-11-19 13:15:46.351491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.602 [2024-11-19 13:15:46.351497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.602 [2024-11-19 13:15:46.351506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.602 [2024-11-19 13:15:46.351513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.602 [2024-11-19 13:15:46.351521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.602 [2024-11-19 13:15:46.351528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.602 [2024-11-19 13:15:46.351537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.602 [2024-11-19 13:15:46.351544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.602 [2024-11-19 13:15:46.351552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.602 [2024-11-19 13:15:46.351559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.602 [2024-11-19 13:15:46.351568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.602 [2024-11-19 13:15:46.351574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.602 [2024-11-19 13:15:46.351583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.602 [2024-11-19 13:15:46.351590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.602 [2024-11-19 13:15:46.351598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.603 [2024-11-19 13:15:46.351604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.603 [2024-11-19 13:15:46.351621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.603 [2024-11-19 13:15:46.351636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.603 [2024-11-19 13:15:46.351651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:36304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.603 [2024-11-19 13:15:46.351665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.603 [2024-11-19 13:15:46.351681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.603 [2024-11-19 13:15:46.351697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.603 [2024-11-19 13:15:46.351712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.603 [2024-11-19 13:15:46.351729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:36344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.603 [2024-11-19 13:15:46.351745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 
[2024-11-19 13:15:46.351753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.603 [2024-11-19 13:15:46.351760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.603 [2024-11-19 13:15:46.351774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.603 [2024-11-19 13:15:46.351790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.603 [2024-11-19 13:15:46.351804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.603 [2024-11-19 13:15:46.351819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:36392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.603 [2024-11-19 13:15:46.351834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.351850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:36432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.351865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.351879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.351895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.351909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.351926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.351941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.351962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:36488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.351978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.351986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.351993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.352001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.352007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.352015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.352023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.352032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.352038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.352047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:36528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.352053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.352061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:10 nsid:1 lba:36536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.352068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.352076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:36544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.352083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.352091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.352098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.352106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.352113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.352123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.352130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.352139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:36576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.352145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.352154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.352160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.352168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:36592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.352177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.352185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.352193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.603 [2024-11-19 13:15:46.352201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.603 [2024-11-19 13:15:46.352207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36616 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:36640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:36656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 
13:15:46.352377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:36712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:36736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:36752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:36776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:36808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:36816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:36824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:36840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:36880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.604 [2024-11-19 13:15:46.352815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.604 [2024-11-19 13:15:46.352822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.605 [2024-11-19 13:15:46.352830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.605 [2024-11-19 13:15:46.352837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:49.605 [2024-11-19 13:15:46.352866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.605 [2024-11-19 13:15:46.352873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36944 len:8 PRP1 0x0 PRP2 0x0 00:23:49.605 [2024-11-19 13:15:46.352880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.605 [2024-11-19 13:15:46.352890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.605 [2024-11-19 13:15:46.352896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.605 [2024-11-19 13:15:46.352902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36400 len:8 PRP1 0x0 PRP2 0x0 00:23:49.605 [2024-11-19 13:15:46.352909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.605 [2024-11-19 13:15:46.352922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.605 [2024-11-19 13:15:46.352928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.605 [2024-11-19 13:15:46.352934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36408 len:8 PRP1 0x0 PRP2 0x0 00:23:49.605 [2024-11-19 13:15:46.352941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.605 [2024-11-19 13:15:46.352954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:49.605 [2024-11-19 13:15:46.352960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:49.605 [2024-11-19 13:15:46.352966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36416 len:8 PRP1 0x0 PRP2 0x0 00:23:49.605 [2024-11-19 13:15:46.352972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.605 [2024-11-19 13:15:46.353015] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:49.605 [2024-11-19 13:15:46.353025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:49.605 [2024-11-19 13:15:46.355862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:49.605 [2024-11-19 13:15:46.355892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x72d340 (9): Bad file descriptor 00:23:49.605 [2024-11-19 13:15:46.382895] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
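The burst of ABORTED - SQ DELETION completions above is the expected signature of a failover: every command still queued on the old submission queue is drained with that status before the initiator reconnects on the alternate path, and each successful reconnect is logged as "Resetting controller successful". The trace below counts exactly those lines; a minimal standalone sketch of the same check, assuming the run's output has been captured to try.txt as failover.sh does:

    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count == 3 )) || echo "expected 3 successful controller resets, got $count" >&2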
00:23:49.605 10986.10 IOPS, 42.91 MiB/s
[2024-11-19T12:15:52.982Z] 10997.00 IOPS, 42.96 MiB/s
[2024-11-19T12:15:52.982Z] 10999.50 IOPS, 42.97 MiB/s
[2024-11-19T12:15:52.982Z] 11024.23 IOPS, 43.06 MiB/s
[2024-11-19T12:15:52.982Z] 11028.36 IOPS, 43.08 MiB/s
[2024-11-19T12:15:52.982Z] 11034.33 IOPS, 43.10 MiB/s
00:23:49.605 Latency(us)
00:23:49.605 [2024-11-19T12:15:52.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:49.605 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:49.605 Verification LBA range: start 0x0 length 0x4000
00:23:49.605 NVMe0n1 : 15.01 11034.55 43.10 254.02 0.00 11316.58 425.63 32824.99
00:23:49.605 [2024-11-19T12:15:52.982Z] ===================================================================================================================
00:23:49.605 [2024-11-19T12:15:52.982Z] Total : 11034.55 43.10 254.02 0.00 11316.58 425.63 32824.99
00:23:49.605 Received shutdown signal, test time was about 15.000000 seconds
00:23:49.605
00:23:49.605 Latency(us)
00:23:49.605 [2024-11-19T12:15:52.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:49.605 [2024-11-19T12:15:52.982Z] ===================================================================================================================
00:23:49.605 [2024-11-19T12:15:52.982Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:49.605 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:49.605 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:49.605 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:49.605 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2943766
00:23:49.605 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2943766 /var/tmp/bdevperf.sock
00:23:49.605 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:49.605 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2943766 ']'
00:23:49.605 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:49.605 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:49.605 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:49.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
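Once bdevperf is idling on its RPC socket (-z), the rest of the test is driven through SPDK's JSON-RPC client. A condensed sketch of the wiring the next trace block performs, with the workspace path shortened to rpc.py (the full commands appear verbatim in the trace below):

    # publish two extra portals on the target side
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # attach the same controller name once per portal; -x failover makes the
    # extra trids alternate paths rather than independent controllers
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

Detaching the active trid (port 4420) then forces bdev_nvme to fail over to 4421/4422, which is what produces the "Start failover from ..." notices seen earlier.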
00:23:49.605 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:49.605 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:49.605 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:49.605 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:23:49.605 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:49.605 [2024-11-19 13:15:52.927631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:49.863 13:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:49.863 [2024-11-19 13:15:53.132232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:23:49.863 13:15:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:50.428 NVMe0n1
00:23:50.428 13:15:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:50.686
00:23:50.686 13:15:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:50.944
00:23:50.944 13:15:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
13:15:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:23:51.202 13:15:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:51.460 13:15:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:23:54.747 13:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
13:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:23:54.747 13:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
13:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2944615
13:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2944615
00:23:55.682 {
00:23:55.682 "results": [
00:23:55.682 {
00:23:55.682 "job": "NVMe0n1",
00:23:55.682 "core_mask": "0x1",
00:23:55.682 "workload": "verify", 00:23:55.682 "status": "finished", 00:23:55.682 "verify_range": { 00:23:55.682 "start": 0, 00:23:55.682 "length": 16384 00:23:55.682 }, 00:23:55.682 "queue_depth": 128, 00:23:55.682 "io_size": 4096, 00:23:55.682 "runtime": 1.047139, 00:23:55.682 "iops": 10383.530744246944, 00:23:55.682 "mibps": 40.560666969714624, 00:23:55.682 "io_failed": 0, 00:23:55.682 "io_timeout": 0, 00:23:55.682 "avg_latency_us": 11809.513613218223, 00:23:55.682 "min_latency_us": 2436.229565217391, 00:23:55.682 "max_latency_us": 43766.65043478261 00:23:55.682 } 00:23:55.682 ], 00:23:55.682 "core_count": 1 00:23:55.682 } 00:23:55.683 13:15:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:55.683 [2024-11-19 13:15:52.540064] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:23:55.683 [2024-11-19 13:15:52.540118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2943766 ] 00:23:55.683 [2024-11-19 13:15:52.617070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.683 [2024-11-19 13:15:52.654957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.683 [2024-11-19 13:15:54.628583] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:55.683 [2024-11-19 13:15:54.628625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.683 [2024-11-19 13:15:54.628636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.683 [2024-11-19 13:15:54.628645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.683 [2024-11-19 13:15:54.628652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.683 [2024-11-19 13:15:54.628660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.683 [2024-11-19 13:15:54.628667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.683 [2024-11-19 13:15:54.628675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.683 [2024-11-19 13:15:54.628682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.683 [2024-11-19 13:15:54.628688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:23:55.683 [2024-11-19 13:15:54.628712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:23:55.683 [2024-11-19 13:15:54.628727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1200340 (9): Bad file descriptor
00:23:55.683 [2024-11-19 13:15:54.639482] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:23:55.683 Running I/O for 1 seconds...
00:23:55.683 10745.00 IOPS, 41.97 MiB/s
00:23:55.683 Latency(us)
00:23:55.683 [2024-11-19T12:15:59.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:55.683 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:55.683 Verification LBA range: start 0x0 length 0x4000
00:23:55.683 NVMe0n1 : 1.05 10383.53 40.56 0.00 0.00 11809.51 2436.23 43766.65
00:23:55.683 [2024-11-19T12:15:59.060Z] ===================================================================================================================
00:23:55.683 [2024-11-19T12:15:59.060Z] Total : 10383.53 40.56 0.00 0.00 11809.51 2436.23 43766.65
00:23:55.683 13:15:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:23:55.683 13:15:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:55.941 13:15:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:56.198 13:15:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
13:15:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:23:56.455 13:15:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:56.713 13:15:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:23:59.998 13:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
13:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:23:59.998 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2943766
13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2943766 ']'
13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2943766
13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2943766
13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:59.998 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2943766'
killing process with pid 2943766
00:23:59.998 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2943766
00:23:59.998 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2943766
00:23:59.998 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:23:59.998 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:00.257 rmmod nvme_tcp
00:24:00.257 rmmod nvme_fabrics
00:24:00.257 rmmod nvme_keyring
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2940893 ']'
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2940893
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2940893 ']'
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2940893
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2940893
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2940893'
killing process with pid 2940893
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2940893
00:24:00.257 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2940893
00:24:00.516 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
']' 00:24:00.516 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:00.516 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:00.516 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:00.516 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:00.516 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:00.516 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:00.516 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:00.516 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:00.516 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.516 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.516 13:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.052 13:16:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:03.052 00:24:03.052 real 0m37.574s 00:24:03.052 user 1m59.004s 00:24:03.052 sys 0m7.993s 00:24:03.052 13:16:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:03.052 13:16:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:03.052 ************************************ 00:24:03.052 END TEST nvmf_failover 00:24:03.052 ************************************ 00:24:03.052 13:16:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:03.052 13:16:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:03.052 13:16:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:03.052 13:16:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.052 ************************************ 00:24:03.052 START TEST nvmf_host_discovery 00:24:03.052 ************************************ 00:24:03.052 13:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:03.052 * Looking for test storage... 
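The nvmf_failover pass that just closed (host/failover.sh@95 through @103 in the trace) reduces to one pattern: detach a TCP path, then confirm the NVMe0 controller is still reported, i.e. I/O failed over to a surviving path. A minimal sketch of that sequence, reusing only the socket, address, ports, and NQN the trace itself shows:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # @95: baseline - the controller created by bdevperf is present
    "$RPC" -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0
    # @98/@99: drop the 4422 path; the controller must survive
    "$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"
    "$RPC" -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0
    # @100-@103: drop the 4421 path too, give the reconnect 3 s to settle,
    # then verify the controller one last time
    "$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"
    sleep 3
    "$RPC" -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0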
00:24:03.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:03.052 13:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:03.052 13:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:24:03.052 13:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:03.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.052 --rc genhtml_branch_coverage=1 00:24:03.052 --rc genhtml_function_coverage=1 00:24:03.052 --rc genhtml_legend=1 00:24:03.052 --rc geninfo_all_blocks=1 00:24:03.052 --rc geninfo_unexecuted_blocks=1 00:24:03.052 00:24:03.052 ' 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:03.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.052 --rc genhtml_branch_coverage=1 00:24:03.052 --rc genhtml_function_coverage=1 00:24:03.052 --rc genhtml_legend=1 00:24:03.052 --rc geninfo_all_blocks=1 00:24:03.052 --rc geninfo_unexecuted_blocks=1 00:24:03.052 00:24:03.052 ' 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:03.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.052 --rc genhtml_branch_coverage=1 00:24:03.052 --rc genhtml_function_coverage=1 00:24:03.052 --rc genhtml_legend=1 00:24:03.052 --rc geninfo_all_blocks=1 00:24:03.052 --rc geninfo_unexecuted_blocks=1 00:24:03.052 00:24:03.052 ' 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:03.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.052 --rc genhtml_branch_coverage=1 00:24:03.052 --rc genhtml_function_coverage=1 00:24:03.052 --rc genhtml_legend=1 00:24:03.052 --rc geninfo_all_blocks=1 00:24:03.052 --rc geninfo_unexecuted_blocks=1 00:24:03.052 00:24:03.052 ' 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:03.052 13:16:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.052 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:03.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:03.053 13:16:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:09.622 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:09.622 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.622 13:16:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.622 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:09.623 Found net devices under 0000:86:00.0: cvl_0_0 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:09.623 Found net devices under 0000:86:00.1: cvl_0_1 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:09.623 
13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:09.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:24:09.623 00:24:09.623 --- 10.0.0.2 ping statistics --- 00:24:09.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.623 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:09.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:24:09.623 00:24:09.623 --- 10.0.0.1 ping statistics --- 00:24:09.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.623 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:09.623 13:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2949064 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2949064 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2949064 ']' 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.623 [2024-11-19 13:16:12.087861] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:24:09.623 [2024-11-19 13:16:12.087905] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.623 [2024-11-19 13:16:12.168348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.623 [2024-11-19 13:16:12.208978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.623 [2024-11-19 13:16:12.209013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.623 [2024-11-19 13:16:12.209020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.623 [2024-11-19 13:16:12.209026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.623 [2024-11-19 13:16:12.209031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.623 [2024-11-19 13:16:12.209593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.623 [2024-11-19 13:16:12.348443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.623 [2024-11-19 13:16:12.360628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.623 null0 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.623 null1 00:24:09.623 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2949089 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2949089 /tmp/host.sock 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2949089 ']' 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:09.624 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.624 [2024-11-19 13:16:12.440206] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:24:09.624 [2024-11-19 13:16:12.440253] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2949089 ] 00:24:09.624 [2024-11-19 13:16:12.514119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.624 [2024-11-19 13:16:12.557152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 
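Boiled down, the discovery bring-up traced above is a handful of RPCs. The sketch below reconstructs them from the xtrace; target-side calls go through rpc.py's default /var/tmp/spdk.sock inside the cvl_0_0_ns_spdk namespace, host-side calls through /tmp/host.sock, and every address, port, and NQN is the one the log shows:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Target side (discovery.sh@32-@37): TCP transport, a discovery listener
    # on port 8009, and two null bdevs to publish as namespaces later.
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    "$RPC" bdev_null_create null0 1000 512   # size/block-size args as traced
    "$RPC" bdev_null_create null1 1000 512
    "$RPC" bdev_wait_for_examine

    # Host side (discovery.sh@50-@51): enable bdev_nvme logging, then start
    # the discovery service, which attaches whatever the log page reports.
    "$RPC" -s /tmp/host.sock log_set_flag bdev_nvme
    "$RPC" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test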
00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.624 [2024-11-19 13:16:12.978194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:09.624 13:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:09.884 13:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:10.451 [2024-11-19 13:16:13.717430] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:10.451 [2024-11-19 13:16:13.717448] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:10.451 
[2024-11-19 13:16:13.717460] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:10.451 [2024-11-19 13:16:13.804716] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:10.710 [2024-11-19 13:16:13.945618] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:10.710 [2024-11-19 13:16:13.946315] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1ef6dd0:1 started. 00:24:10.710 [2024-11-19 13:16:13.947677] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:10.710 [2024-11-19 13:16:13.947692] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:10.710 [2024-11-19 13:16:13.995552] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1ef6dd0 was disconnected and freed. delete nvme_qpair. 00:24:10.969 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:10.969 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:10.969 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:10.969 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:10.969 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:10.969 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.969 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:10.969 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.969 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:10.969 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.969 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.969 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:10.969 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:10.969 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:10.969 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:10.970 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.229 [2024-11-19 13:16:14.358062] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1ef71a0:1 started. 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:11.229 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:11.230 [2024-11-19 13:16:14.366253] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1ef71a0 was disconnected and freed. delete nvme_qpair. 
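The repeating pattern in the xtrace above -- local max=10, (( max-- )), eval, sleep 1 -- is the harness's generic polling helper. A minimal sketch of it, reconstructed from the common/autotest_common.sh@918-924 markers in the log (the body is inferred from the trace, so treat the exact wording as an assumption):

waitforcondition() {
    local cond=$1    # condition string, re-eval'd each pass (sh@918)
    local max=10     # at most 10 attempts (sh@919)
    while (( max-- )); do        # sh@920
        if eval "$cond"; then    # sh@921
            return 0             # sh@922: condition met
        fi
        sleep 1                  # sh@924: wait and retry
    done
    return 1                     # gave up; condition never held
}

It is invoked above as, e.g., waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' so the test tolerates the asynchronous attach of the discovered controller.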
00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.230 [2024-11-19 13:16:14.454155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:11.230 [2024-11-19 13:16:14.454803] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:11.230 [2024-11-19 13:16:14.454823] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:11.230 [2024-11-19 13:16:14.543084] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:11.230 13:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:11.490 [2024-11-19 13:16:14.805415] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:11.490 [2024-11-19 13:16:14.805451] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:11.490 [2024-11-19 13:16:14.805459] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:11.490 [2024-11-19 13:16:14.805463] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.428 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.428 [2024-11-19 13:16:15.689773] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:12.428 [2024-11-19 13:16:15.689794] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:12.428 [2024-11-19 13:16:15.690036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.428 [2024-11-19 13:16:15.690053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.428 [2024-11-19 13:16:15.690062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.428 [2024-11-19 13:16:15.690073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.428 [2024-11-19 13:16:15.690081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.428 [2024-11-19 13:16:15.690087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.428 [2024-11-19 13:16:15.690094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.428 [2024-11-19 13:16:15.690101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.428 [2024-11-19 13:16:15.690107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec7390 is same with the state(6) to be set 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:12.429 [2024-11-19 13:16:15.700046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec7390 (9): Bad file descriptor 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:12.429 [2024-11-19 13:16:15.710081] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:12.429 [2024-11-19 13:16:15.710095] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:12.429 [2024-11-19 13:16:15.710100] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:12.429 [2024-11-19 13:16:15.710105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:12.429 [2024-11-19 13:16:15.710126] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:12.429 [2024-11-19 13:16:15.710399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.429 [2024-11-19 13:16:15.710415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec7390 with addr=10.0.0.2, port=4420 00:24:12.429 [2024-11-19 13:16:15.710423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec7390 is same with the state(6) to be set 00:24:12.429 [2024-11-19 13:16:15.710436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec7390 (9): Bad file descriptor 00:24:12.429 [2024-11-19 13:16:15.710446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:12.429 [2024-11-19 13:16:15.710453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:12.429 [2024-11-19 13:16:15.710465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:12.429 [2024-11-19 13:16:15.710471] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:12.429 [2024-11-19 13:16:15.710476] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:12.429 [2024-11-19 13:16:15.710480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.429 [2024-11-19 13:16:15.720157] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:12.429 [2024-11-19 13:16:15.720169] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:12.429 [2024-11-19 13:16:15.720174] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:12.429 [2024-11-19 13:16:15.720179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:12.429 [2024-11-19 13:16:15.720194] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:12.429 [2024-11-19 13:16:15.720327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.429 [2024-11-19 13:16:15.720339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec7390 with addr=10.0.0.2, port=4420 00:24:12.429 [2024-11-19 13:16:15.720346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec7390 is same with the state(6) to be set 00:24:12.429 [2024-11-19 13:16:15.720357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec7390 (9): Bad file descriptor 00:24:12.429 [2024-11-19 13:16:15.720366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:12.429 [2024-11-19 13:16:15.720373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:12.429 [2024-11-19 13:16:15.720380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:12.429 [2024-11-19 13:16:15.720386] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:12.429 [2024-11-19 13:16:15.720390] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:12.429 [2024-11-19 13:16:15.720394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:12.429 [2024-11-19 13:16:15.730225] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:12.429 [2024-11-19 13:16:15.730238] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:12.429 [2024-11-19 13:16:15.730242] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:12.429 [2024-11-19 13:16:15.730246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:12.429 [2024-11-19 13:16:15.730261] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:12.429 [2024-11-19 13:16:15.730433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.429 [2024-11-19 13:16:15.730446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec7390 with addr=10.0.0.2, port=4420 00:24:12.429 [2024-11-19 13:16:15.730453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec7390 is same with the state(6) to be set 00:24:12.429 [2024-11-19 13:16:15.730464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec7390 (9): Bad file descriptor 00:24:12.429 [2024-11-19 13:16:15.730474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:12.429 [2024-11-19 13:16:15.730484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:12.429 [2024-11-19 13:16:15.730491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:12.429 [2024-11-19 13:16:15.730497] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:12.429 [2024-11-19 13:16:15.730502] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:12.429 [2024-11-19 13:16:15.730505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:12.429 [2024-11-19 13:16:15.740293] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:12.429 [2024-11-19 13:16:15.740307] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:12.429 [2024-11-19 13:16:15.740311] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:12.429 [2024-11-19 13:16:15.740315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:12.429 [2024-11-19 13:16:15.740330] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:12.429 [2024-11-19 13:16:15.740515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.429 [2024-11-19 13:16:15.740529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec7390 with addr=10.0.0.2, port=4420 00:24:12.429 [2024-11-19 13:16:15.740537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec7390 is same with the state(6) to be set 00:24:12.429 [2024-11-19 13:16:15.740548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec7390 (9): Bad file descriptor 00:24:12.429 [2024-11-19 13:16:15.740559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:12.429 [2024-11-19 13:16:15.740565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:12.429 [2024-11-19 13:16:15.740572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:12.429 [2024-11-19 13:16:15.740578] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:12.429 [2024-11-19 13:16:15.740583] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:12.429 [2024-11-19 13:16:15.740586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:12.429 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:12.429 [2024-11-19 13:16:15.750361] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:12.429 [2024-11-19 13:16:15.750372] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:12.429 [2024-11-19 13:16:15.750380] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:12.429 [2024-11-19 13:16:15.750384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:12.429 [2024-11-19 13:16:15.750398] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:12.429 [2024-11-19 13:16:15.750497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.429 [2024-11-19 13:16:15.750509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec7390 with addr=10.0.0.2, port=4420 00:24:12.429 [2024-11-19 13:16:15.750517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec7390 is same with the state(6) to be set 00:24:12.429 [2024-11-19 13:16:15.750527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec7390 (9): Bad file descriptor 00:24:12.430 [2024-11-19 13:16:15.750536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:12.430 [2024-11-19 13:16:15.750543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:12.430 [2024-11-19 13:16:15.750550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:12.430 [2024-11-19 13:16:15.750556] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:12.430 [2024-11-19 13:16:15.750560] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:24:12.430 [2024-11-19 13:16:15.750564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:12.430 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.430 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:12.430 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.430 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:12.430 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.430 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:12.430 [2024-11-19 13:16:15.760430] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:12.430 [2024-11-19 13:16:15.760444] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:12.430 [2024-11-19 13:16:15.760448] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:12.430 [2024-11-19 13:16:15.760452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:12.430 [2024-11-19 13:16:15.760467] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:12.430 [2024-11-19 13:16:15.760634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.430 [2024-11-19 13:16:15.760646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec7390 with addr=10.0.0.2, port=4420 00:24:12.430 [2024-11-19 13:16:15.760654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec7390 is same with the state(6) to be set 00:24:12.430 [2024-11-19 13:16:15.760665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec7390 (9): Bad file descriptor 00:24:12.430 [2024-11-19 13:16:15.760675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:12.430 [2024-11-19 13:16:15.760682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:12.430 [2024-11-19 13:16:15.760689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:12.430 [2024-11-19 13:16:15.760701] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:12.430 [2024-11-19 13:16:15.760706] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:12.430 [2024-11-19 13:16:15.760710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:12.430 [2024-11-19 13:16:15.770498] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:12.430 [2024-11-19 13:16:15.770508] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:12.430 [2024-11-19 13:16:15.770512] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:12.430 [2024-11-19 13:16:15.770516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:12.430 [2024-11-19 13:16:15.770529] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:12.430 [2024-11-19 13:16:15.770706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.430 [2024-11-19 13:16:15.770717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec7390 with addr=10.0.0.2, port=4420 00:24:12.430 [2024-11-19 13:16:15.770725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec7390 is same with the state(6) to be set 00:24:12.430 [2024-11-19 13:16:15.770736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec7390 (9): Bad file descriptor 00:24:12.430 [2024-11-19 13:16:15.770746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:12.430 [2024-11-19 13:16:15.770753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:12.430 [2024-11-19 13:16:15.770759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:12.430 [2024-11-19 13:16:15.770765] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:12.430 [2024-11-19 13:16:15.770770] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:12.430 [2024-11-19 13:16:15.770774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
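The get_subsystem_names, get_bdev_list and get_subsystem_paths calls that the polling loop keeps re-evaluating are thin JSON-RPC queries against the host application. A sketch consistent with the discovery.sh@55/@59/@63 pipelines visible in the trace (HOST_SOCK is a hypothetical stand-in for the /tmp/host.sock argument the log shows):

HOST_SOCK=/tmp/host.sock  # assumed variable name; socket path taken from the trace

get_subsystem_names() {   # discovery.sh@59: attached controller names, sorted
    rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {         # discovery.sh@55: bdev names, e.g. "nvme0n1 nvme0n2"
    rpc_cmd -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {   # discovery.sh@63: trsvcids of every path to controller $1
    rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

The trailing xargs flattens jq's one-name-per-line output into a single space-separated string, which is why the conditions above compare against "nvme0n1 nvme0n2" and "4420 4421".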
00:24:12.430 [2024-11-19 13:16:15.777051] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:12.430 [2024-11-19 13:16:15.777066] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:12.430 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.430 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:12.430 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.430 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:12.430 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:12.430 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.430 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.430 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:12.430 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:12.430 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:12.690 13:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.690 13:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.068 [2024-11-19 13:16:17.116524] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:14.068 [2024-11-19 13:16:17.116541] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:14.068 [2024-11-19 13:16:17.116551] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:14.068 [2024-11-19 13:16:17.244966] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:14.328 [2024-11-19 13:16:17.510200] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:14.328 [2024-11-19 13:16:17.510823] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1efd1b0:1 started. 00:24:14.328 [2024-11-19 13:16:17.512476] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:14.328 [2024-11-19 13:16:17.512501] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:14.328 [2024-11-19 13:16:17.514494] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1efd1b0 was disconnected and freed. delete nvme_qpair. 
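The notification_count=.../notify_id=... bookkeeping in the trace comes from two more helpers. A sketch matching the discovery.sh@74-75 and @79-80 markers; the notify_id increment is inferred from it stepping 0 -> 1 -> 2 -> 4 in the log, so the exact arithmetic is an assumption:

get_notification_count() {   # discovery.sh@74-75
    # Count only notifications newer than the last offset we consumed.
    notification_count=$(rpc_cmd -s "$HOST_SOCK" notify_get_notifications -i "$notify_id" \
        | jq '. | length')
    notify_id=$((notify_id + notification_count))   # advance past what we have seen
}

is_notification_count_eq() { # discovery.sh@79-80: poll until the delta matches
    local expected_count=$1
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}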
00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.328 request: 00:24:14.328 { 00:24:14.328 "name": "nvme", 00:24:14.328 "trtype": "tcp", 00:24:14.328 "traddr": "10.0.0.2", 00:24:14.328 "adrfam": "ipv4", 00:24:14.328 "trsvcid": "8009", 00:24:14.328 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:14.328 "wait_for_attach": true, 00:24:14.328 "method": "bdev_nvme_start_discovery", 00:24:14.328 "req_id": 1 00:24:14.328 } 00:24:14.328 Got JSON-RPC error response 00:24:14.328 response: 00:24:14.328 { 00:24:14.328 "code": -17, 00:24:14.328 "message": "File exists" 00:24:14.328 } 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:14.328 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.329 request: 00:24:14.329 { 00:24:14.329 "name": "nvme_second", 00:24:14.329 "trtype": "tcp", 00:24:14.329 "traddr": "10.0.0.2", 00:24:14.329 "adrfam": "ipv4", 00:24:14.329 "trsvcid": "8009", 00:24:14.329 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:14.329 "wait_for_attach": true, 00:24:14.329 "method": "bdev_nvme_start_discovery", 00:24:14.329 "req_id": 1 00:24:14.329 } 00:24:14.329 Got JSON-RPC error response 00:24:14.329 response: 00:24:14.329 { 00:24:14.329 "code": -17, 00:24:14.329 "message": "File exists" 00:24:14.329 } 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:14.329 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:14.588 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.588 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:14.588 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:14.588 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:14.588 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:14.588 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:14.588 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.588 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:14.588 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.588 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:14.588 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.588 13:16:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.525 [2024-11-19 13:16:18.756259] posix.c:1054:posix_sock_create: *ERROR*: connect() 
failed, errno = 111 00:24:15.525 [2024-11-19 13:16:18.756286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec4b60 with addr=10.0.0.2, port=8010 00:24:15.525 [2024-11-19 13:16:18.756302] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:15.525 [2024-11-19 13:16:18.756309] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:15.525 [2024-11-19 13:16:18.756316] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:16.461 [2024-11-19 13:16:19.758627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:16.461 [2024-11-19 13:16:19.758652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec4b60 with addr=10.0.0.2, port=8010 00:24:16.461 [2024-11-19 13:16:19.758671] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:16.462 [2024-11-19 13:16:19.758678] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:16.462 [2024-11-19 13:16:19.758684] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:17.398 [2024-11-19 13:16:20.760863] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:17.398 request: 00:24:17.398 { 00:24:17.398 "name": "nvme_second", 00:24:17.398 "trtype": "tcp", 00:24:17.398 "traddr": "10.0.0.2", 00:24:17.398 "adrfam": "ipv4", 00:24:17.398 "trsvcid": "8010", 00:24:17.398 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:17.398 "wait_for_attach": false, 00:24:17.398 "attach_timeout_ms": 3000, 00:24:17.398 "method": "bdev_nvme_start_discovery", 00:24:17.398 "req_id": 1 00:24:17.398 } 00:24:17.398 Got JSON-RPC error response 00:24:17.398 response: 00:24:17.398 { 00:24:17.398 "code": -110, 00:24:17.398 "message": "Connection timed out" 00:24:17.398 } 00:24:17.398 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:17.398 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:17.398 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:17.398 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:17.398 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:17.398 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:17.398 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:17.398 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:17.398 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.398 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:17.398 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.398 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:17.657 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.657 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:17.657 13:16:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:17.657 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2949089 00:24:17.657 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:17.657 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:17.657 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:17.657 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:17.657 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:17.657 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:17.657 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:17.657 rmmod nvme_tcp 00:24:17.658 rmmod nvme_fabrics 00:24:17.658 rmmod nvme_keyring 00:24:17.658 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:17.658 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:17.658 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:17.658 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2949064 ']' 00:24:17.658 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2949064 00:24:17.658 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2949064 ']' 00:24:17.658 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2949064 00:24:17.658 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:17.658 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.658 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2949064 00:24:17.658 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:17.658 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:17.658 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2949064' 00:24:17.658 killing process with pid 2949064 00:24:17.658 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2949064 00:24:17.658 13:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2949064 00:24:17.918 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:17.918 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:17.918 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:17.918 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:17.918 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:17.918 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:17.918 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:17.918 13:16:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:17.918 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:17.918 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.918 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.918 13:16:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.823 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:19.823 00:24:19.823 real 0m17.275s 00:24:19.823 user 0m20.655s 00:24:19.823 sys 0m5.805s 00:24:19.823 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:19.823 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.823 ************************************ 00:24:19.823 END TEST nvmf_host_discovery 00:24:19.823 ************************************ 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.083 ************************************ 00:24:20.083 START TEST nvmf_host_multipath_status 00:24:20.083 ************************************ 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:20.083 * Looking for test storage... 
00:24:20.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:20.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.083 --rc genhtml_branch_coverage=1 00:24:20.083 --rc genhtml_function_coverage=1 00:24:20.083 --rc genhtml_legend=1 00:24:20.083 --rc geninfo_all_blocks=1 00:24:20.083 --rc geninfo_unexecuted_blocks=1 00:24:20.083 00:24:20.083 ' 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:20.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.083 --rc genhtml_branch_coverage=1 00:24:20.083 --rc genhtml_function_coverage=1 00:24:20.083 --rc genhtml_legend=1 00:24:20.083 --rc geninfo_all_blocks=1 00:24:20.083 --rc geninfo_unexecuted_blocks=1 00:24:20.083 00:24:20.083 ' 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:20.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.083 --rc genhtml_branch_coverage=1 00:24:20.083 --rc genhtml_function_coverage=1 00:24:20.083 --rc genhtml_legend=1 00:24:20.083 --rc geninfo_all_blocks=1 00:24:20.083 --rc geninfo_unexecuted_blocks=1 00:24:20.083 00:24:20.083 ' 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:20.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.083 --rc genhtml_branch_coverage=1 00:24:20.083 --rc genhtml_function_coverage=1 00:24:20.083 --rc genhtml_legend=1 00:24:20.083 --rc geninfo_all_blocks=1 00:24:20.083 --rc geninfo_unexecuted_blocks=1 00:24:20.083 00:24:20.083 ' 00:24:20.083 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:20.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:20.084 13:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:26.661 13:16:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:26.661 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.661 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:26.662 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:26.662 Found net devices under 0000:86:00.0: cvl_0_0 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:24:26.662 Found net devices under 0000:86:00.1: cvl_0_1 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.662 13:16:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:26.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:24:26.662 00:24:26.662 --- 10.0.0.2 ping statistics --- 00:24:26.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.662 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:26.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:24:26.662 00:24:26.662 --- 10.0.0.1 ping statistics --- 00:24:26.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.662 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2954163 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2954163 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2954163 ']' 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.662 13:16:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.662 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.663 13:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:26.663 [2024-11-19 13:16:29.371143] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:24:26.663 [2024-11-19 13:16:29.371194] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.663 [2024-11-19 13:16:29.455053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:26.663 [2024-11-19 13:16:29.496257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.663 [2024-11-19 13:16:29.496292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.663 [2024-11-19 13:16:29.496300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.663 [2024-11-19 13:16:29.496306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.663 [2024-11-19 13:16:29.496311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.663 [2024-11-19 13:16:29.497503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.663 [2024-11-19 13:16:29.497505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.921 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.921 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:26.922 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:26.922 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:26.922 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:26.922 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.922 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2954163 00:24:26.922 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:27.181 [2024-11-19 13:16:30.411237] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.181 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:27.440 Malloc0 00:24:27.440 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:24:27.699 13:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:27.699 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.958 [2024-11-19 13:16:31.236332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.958 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:28.217 [2024-11-19 13:16:31.444864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:28.217 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:28.217 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2954638 00:24:28.217 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:28.217 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2954638 /var/tmp/bdevperf.sock 00:24:28.217 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2954638 ']' 00:24:28.217 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:28.217 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.217 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:28.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:28.217 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.217 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:28.476 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.476 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:28.476 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:28.822 13:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:29.081 Nvme0n1 00:24:29.081 13:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:29.649 Nvme0n1 00:24:29.649 13:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:29.649 13:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:31.554 13:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:31.554 13:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:31.814 13:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:32.073 13:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:33.011 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:33.011 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:33.011 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.011 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:33.270 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.270 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:33.270 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.270 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:33.529 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:33.529 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:33.529 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.529 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:33.788 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.789 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:33.789 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.789 13:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:33.789 13:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.789 13:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:33.789 13:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.789 13:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:34.048 13:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.048 13:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:34.048 13:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.048 13:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:34.308 13:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.308 13:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:34.308 13:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
00:24:34.567 13:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:34.827 13:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:35.765 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:35.765 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:35.765 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.765 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:36.024 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:36.024 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:36.024 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.024 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:36.282 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.282 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:36.282 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.282 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:36.282 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.282 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:36.282 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.282 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:36.542 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.542 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:36.542 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
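Each port_status step in the trace is one bdev_nvme_get_io_paths RPC against the bdevperf socket followed by a jq filter on a single path attribute. A minimal sketch of that check, assuming a $rpc_py_bdevperf variable standing in for "rpc.py -s /var/tmp/bdevperf.sock"; the variable and local names are illustrative, and the jq expression is the one visible in the log:

    # port_status <trsvcid> <attribute> <expected>
    # attribute is one of: current, connected, accessible
    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        # Query all io_paths known to bdevperf and pick the attribute of the
        # path whose listener trsvcid matches the requested port.
        actual=$($rpc_py_bdevperf bdev_nvme_get_io_paths | \
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }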
00:24:36.542 13:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:36.802 13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.802 13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:36.802 13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.802 13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:37.061 13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.061 13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:37.061 13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:37.321 13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:37.321 13:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:38.699 13:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:38.700 13:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:38.700 13:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.700 13:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:38.700 13:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.700 13:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:38.700 13:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.700 13:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:38.958 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:38.958 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:38.958 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.959 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:38.959 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.959 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:38.959 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.959 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:39.217 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.217 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:39.217 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.218 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:39.477 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.477 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:39.477 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.477 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:39.736 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.736 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:39.736 13:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:39.995 13:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:39.995 13:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:41.371 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:41.371 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:41.371 13:16:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.371 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:41.371 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.371 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:41.371 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.371 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:41.631 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:41.631 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:41.631 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.631 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:41.631 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.631 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:41.631 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.631 13:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:41.890 13:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.890 13:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:41.890 13:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.890 13:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:42.149 13:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.149 13:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:42.149 13:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:42.149 13:16:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.408 13:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:42.408 13:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:42.408 13:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:42.667 13:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:42.667 13:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:44.044 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:44.044 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:44.044 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.044 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:44.044 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.044 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:44.044 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.044 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:44.315 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.315 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:44.315 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.315 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:44.315 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.315 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:44.315 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.315 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:44.572 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.572 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:44.572 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.572 13:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:44.831 13:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.831 13:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:44.831 13:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.831 13:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:45.090 13:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:45.090 13:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:45.090 13:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:45.090 13:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:45.348 13:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:46.284 13:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:46.284 13:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:46.284 13:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.284 13:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:46.544 13:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:46.544 13:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:46.544 13:16:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.544 13:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:46.803 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.803 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:46.803 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.803 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:47.062 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.062 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:47.062 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:47.062 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.320 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.320 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:47.320 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:47.320 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.578 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:47.578 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:47.578 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.578 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:47.578 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.578 13:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:47.837 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:47.837 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:48.097 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:48.356 13:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:49.294 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:49.294 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:49.294 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.294 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:49.554 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.554 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:49.554 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.554 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:49.814 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.814 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:49.814 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.814 13:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:50.073 13:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.073 13:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:50.073 13:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.073 13:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:50.073 13:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.073 13:16:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:50.073 13:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.073 13:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:50.332 13:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.332 13:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:50.332 13:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.332 13:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:50.591 13:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.591 13:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:50.591 13:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:50.851 13:16:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:51.110 13:16:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:52.049 13:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:52.049 13:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:52.049 13:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.049 13:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:52.308 13:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.308 13:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:52.308 13:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.308 13:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:52.567 13:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.567 13:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:52.567 13:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.567 13:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:52.567 13:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.567 13:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:52.567 13:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.567 13:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:52.826 13:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.826 13:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:52.826 13:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.826 13:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:53.086 13:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.086 13:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:53.086 13:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.086 13:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:53.345 13:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.345 13:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:53.345 13:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:53.605 13:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:53.605 13:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
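Each check_status assertion in this trace is just six port_status calls, one attribute per port. Note that after the bdev_nvme_set_multipath_policy ... -p active_active call at @116, every reachable path is used for I/O, so both ports report current=true; that is why the @131 check below expects true in both of the first two positions, where the earlier active-passive checks expected true/false. A minimal sketch of such a wrapper, reusing the illustrative port_status above:

    # check_status <4420.current> <4421.current> <4420.connected> \
    #              <4421.connected> <4420.accessible> <4421.accessible>
    check_status() {
        port_status 4420 current    "$1"
        port_status 4421 current    "$2"
        port_status 4420 connected  "$3"
        port_status 4421 connected  "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }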
00:24:54.986 13:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:54.986 13:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:54.986 13:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.986 13:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:54.986 13:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.986 13:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:54.986 13:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.986 13:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:55.245 13:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.245 13:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:55.245 13:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.245 13:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:55.245 13:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.245 13:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:55.245 13:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:55.245 13:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.505 13:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.505 13:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:55.505 13:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.505 13:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:55.764 13:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.764 13:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:55.764 13:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.764 13:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:56.026 13:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.026 13:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:56.026 13:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:56.289 13:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:56.548 13:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:57.487 13:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:57.487 13:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:57.487 13:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.487 13:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:57.746 13:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.746 13:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:57.746 13:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.746 13:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:58.005 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:58.005 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:58.005 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.005 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:58.005 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:58.005 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:58.005 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.005 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:58.264 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.264 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:58.264 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.264 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:58.523 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.523 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:58.523 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.523 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:58.783 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:58.783 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2954638 00:24:58.783 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2954638 ']' 00:24:58.783 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2954638 00:24:58.783 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:58.783 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.783 13:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2954638 00:24:58.783 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:58.783 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:58.783 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2954638' 00:24:58.783 killing process with pid 2954638 00:24:58.783 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2954638 00:24:58.783 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2954638 00:24:58.783 { 00:24:58.783 "results": [ 00:24:58.783 { 00:24:58.783 "job": "Nvme0n1", 
00:24:58.783 "core_mask": "0x4", 00:24:58.783 "workload": "verify", 00:24:58.783 "status": "terminated", 00:24:58.783 "verify_range": { 00:24:58.783 "start": 0, 00:24:58.783 "length": 16384 00:24:58.783 }, 00:24:58.783 "queue_depth": 128, 00:24:58.783 "io_size": 4096, 00:24:58.783 "runtime": 29.027439, 00:24:58.783 "iops": 10459.620636873959, 00:24:58.783 "mibps": 40.8578931127889, 00:24:58.783 "io_failed": 0, 00:24:58.783 "io_timeout": 0, 00:24:58.783 "avg_latency_us": 12217.54738323924, 00:24:58.783 "min_latency_us": 205.6904347826087, 00:24:58.783 "max_latency_us": 3019898.88 00:24:58.783 } 00:24:58.783 ], 00:24:58.783 "core_count": 1 00:24:58.783 } 00:24:59.046 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2954638 00:24:59.046 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:59.046 [2024-11-19 13:16:31.514740] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:24:59.046 [2024-11-19 13:16:31.514795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2954638 ] 00:24:59.046 [2024-11-19 13:16:31.591930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.046 [2024-11-19 13:16:31.633280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:59.046 Running I/O for 90 seconds... 00:24:59.046 11426.00 IOPS, 44.63 MiB/s [2024-11-19T12:17:02.423Z] 11351.50 IOPS, 44.34 MiB/s [2024-11-19T12:17:02.423Z] 11354.67 IOPS, 44.35 MiB/s [2024-11-19T12:17:02.423Z] 11359.25 IOPS, 44.37 MiB/s [2024-11-19T12:17:02.423Z] 11334.20 IOPS, 44.27 MiB/s [2024-11-19T12:17:02.423Z] 11327.33 IOPS, 44.25 MiB/s [2024-11-19T12:17:02.423Z] 11295.43 IOPS, 44.12 MiB/s [2024-11-19T12:17:02.423Z] 11263.50 IOPS, 44.00 MiB/s [2024-11-19T12:17:02.423Z] 11279.11 IOPS, 44.06 MiB/s [2024-11-19T12:17:02.423Z] 11298.60 IOPS, 44.14 MiB/s [2024-11-19T12:17:02.423Z] 11294.00 IOPS, 44.12 MiB/s [2024-11-19T12:17:02.423Z] 11299.75 IOPS, 44.14 MiB/s [2024-11-19T12:17:02.423Z] [2024-11-19 13:16:45.792448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.046 [2024-11-19 13:16:45.792486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:59.046 [2024-11-19 13:16:45.792522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.046 [2024-11-19 13:16:45.792531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:59.046 [2024-11-19 13:16:45.792544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.046 [2024-11-19 13:16:45.792552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:59.046 [2024-11-19 13:16:45.792564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:59.046 [2024-11-19 13:16:45.792571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:59.046 [2024-11-19 13:16:45.792584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.046 [2024-11-19 13:16:45.792591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:59.046 [2024-11-19 13:16:45.792604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.046 [2024-11-19 13:16:45.792610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:59.046 [2024-11-19 13:16:45.792623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.046 [2024-11-19 13:16:45.792630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:59.046 [2024-11-19 13:16:45.792643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.046 [2024-11-19 13:16:45.792650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:59.046 [2024-11-19 13:16:45.793207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.046 [2024-11-19 13:16:45.793224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:59.046 [2024-11-19 13:16:45.793239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.046 [2024-11-19 13:16:45.793251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:59.046 [2024-11-19 13:16:45.793266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.046 [2024-11-19 13:16:45.793273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:59.046 [2024-11-19 13:16:45.793286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.046 [2024-11-19 13:16:45.793294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:59.046 [2024-11-19 13:16:45.793307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.046 [2024-11-19 13:16:45.793314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:59.046 [2024-11-19 13:16:45.793326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 
dnr:0 00:24:59.047 [2024-11-19 13:16:45.793734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.793984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.793991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.794005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.794013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.794026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.794033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.794047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.794054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.794067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.794075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.794087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.794094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:59.047 [2024-11-19 13:16:45.794107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.047 [2024-11-19 13:16:45.794114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.048 [2024-11-19 13:16:45.794251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.048 [2024-11-19 13:16:45.794273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.048 [2024-11-19 13:16:45.794297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.048 [2024-11-19 13:16:45.794320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.048 [2024-11-19 13:16:45.794342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.048 [2024-11-19 13:16:45.794366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.048 [2024-11-19 13:16:45.794389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:43 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794895] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.794989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.794996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.795012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.795020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.795035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.795041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.795057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.048 [2024-11-19 13:16:45.795065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.048 [2024-11-19 13:16:45.795149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 
13:16:45.795729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.049 [2024-11-19 13:16:45.795755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.049 [2024-11-19 13:16:45.795781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.049 [2024-11-19 13:16:45.795805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.049 [2024-11-19 13:16:45.795830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.049 [2024-11-19 13:16:45.795856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.049 [2024-11-19 13:16:45.795881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.049 [2024-11-19 13:16:45.795906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.049 [2024-11-19 13:16:45.795933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.049 [2024-11-19 13:16:45.795961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.795979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109696 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.049 [2024-11-19 13:16:45.795987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.796005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.049 [2024-11-19 13:16:45.796013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.796031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.049 [2024-11-19 13:16:45.796037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.796057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.049 [2024-11-19 13:16:45.796065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.796083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.049 [2024-11-19 13:16:45.796089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.796107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.049 [2024-11-19 13:16:45.796115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:59.049 [2024-11-19 13:16:45.796133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.049 [2024-11-19 13:16:45.796140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:59.049 11136.00 IOPS, 43.50 MiB/s [2024-11-19T12:17:02.426Z] 10340.57 IOPS, 40.39 MiB/s [2024-11-19T12:17:02.426Z] 9651.20 IOPS, 37.70 MiB/s [2024-11-19T12:17:02.426Z] 9178.06 IOPS, 35.85 MiB/s [2024-11-19T12:17:02.426Z] 9290.71 IOPS, 36.29 MiB/s [2024-11-19T12:17:02.426Z] 9391.00 IOPS, 36.68 MiB/s [2024-11-19T12:17:02.427Z] 9553.84 IOPS, 37.32 MiB/s [2024-11-19T12:17:02.427Z] 9734.80 IOPS, 38.03 MiB/s [2024-11-19T12:17:02.427Z] 9897.67 IOPS, 38.66 MiB/s [2024-11-19T12:17:02.427Z] 9978.59 IOPS, 38.98 MiB/s [2024-11-19T12:17:02.427Z] 10033.00 IOPS, 39.19 MiB/s [2024-11-19T12:17:02.427Z] 10083.75 IOPS, 39.39 MiB/s [2024-11-19T12:17:02.427Z] 10206.24 IOPS, 39.87 MiB/s [2024-11-19T12:17:02.427Z] 10325.08 IOPS, 40.33 MiB/s [2024-11-19T12:17:02.427Z] [2024-11-19 13:16:59.688956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.688996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689030] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d 
p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.689386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.689393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.690292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.050 [2024-11-19 13:16:59.690312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.690328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.050 [2024-11-19 13:16:59.690336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.690349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.050 [2024-11-19 13:16:59.690356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.690370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.050 [2024-11-19 13:16:59.690377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.690390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.050 [2024-11-19 13:16:59.690397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.690410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.690418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.690431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.690438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.690450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.690461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.690473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.690481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.690494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.690501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.690514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:121744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.690520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.690533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.690543] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.690555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.690562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.690575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.050 [2024-11-19 13:16:59.690583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:59.050 [2024-11-19 13:16:59.690597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.051 [2024-11-19 13:16:59.690683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.051 [2024-11-19 13:16:59.690704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121888 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:122000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:122032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:54 nsid:1 lba:122048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:122064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.690987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.690994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.691007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.051 [2024-11-19 13:16:59.691014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.691027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.051 [2024-11-19 13:16:59.691035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.691803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:122096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.691818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.691834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:122112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.691841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.691854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:122128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.691863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.691875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.691882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.691895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:122160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.051 [2024-11-19 13:16:59.691902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:59.051 [2024-11-19 13:16:59.691915] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:122176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.051 [2024-11-19 13:16:59.691923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:59.051 [2024-11-19 13:16:59.691936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.051 [2024-11-19 13:16:59.691943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.051 [2024-11-19 13:16:59.691964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:122208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.051 [2024-11-19 13:16:59.691971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.051 10406.11 IOPS, 40.65 MiB/s [2024-11-19T12:17:02.428Z] 10431.29 IOPS, 40.75 MiB/s [2024-11-19T12:17:02.428Z] 10459.52 IOPS, 40.86 MiB/s [2024-11-19T12:17:02.428Z] Received shutdown signal, test time was about 29.028104 seconds
00:24:59.051
00:24:59.051 Latency(us)
00:24:59.051 [2024-11-19T12:17:02.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:59.051 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:59.051 Verification LBA range: start 0x0 length 0x4000
00:24:59.051 Nvme0n1 : 29.03 10459.62 40.86 0.00 0.00 12217.55 205.69 3019898.88
00:24:59.051 [2024-11-19T12:17:02.428Z] ===================================================================================================================
00:24:59.051 [2024-11-19T12:17:02.428Z] Total : 10459.62 40.86 0.00 0.00 12217.55 205.69 3019898.88
00:24:59.051 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:59.051 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:24:59.051 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:59.311 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
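The trace above is nvmftestfini tearing the target down: the bdevperf summary is printed, the subsystem is removed over RPC, and nvmfcleanup unloads the kernel NVMe/TCP modules inside a bounded retry loop (the rmmod lines are modprobe -v reporting the modules it removed). A minimal sketch of that cleanup shape, reconstructed only from the xtrace line numbers visible above; the TEST_TRANSPORT variable name, the break-on-success, and the sleep are assumptions, not a copy of nvmf/common.sh:

nvmfcleanup() {
    sync                                     # @121: flush page cache before unloading anything
    if [ "$TEST_TRANSPORT" == tcp ]; then    # @123: this run compared 'tcp == tcp' (variable name assumed)
        set +e                               # @124: module removal may fail while references linger
        for i in {1..20}; do                 # @125: bounded retry rather than waiting forever
            modprobe -v -r nvme-tcp && break # @126: '-v' prints the rmmod lines seen in the log
            sleep 1                          # assumption: back off between attempts
        done
        modprobe -v -r nvme-fabrics          # @127: the fabrics core goes last
        set -e                               # @128: restore fail-fast for the rest of the teardown
    fi
}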
13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2954163 ']' 00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2954163 00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2954163 ']' 00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2954163 00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2954163 00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2954163' 00:24:59.311 killing process with pid 2954163 00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2954163 00:24:59.311 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2954163 00:24:59.570 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:59.571 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:59.571 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:59.571 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:59.571 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:59.571 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:59.571 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:59.571 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:59.571 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:59.571 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.571 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.571 13:17:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.477 13:17:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:01.477 00:25:01.477 real 0m41.546s 00:25:01.477 user 1m52.850s 00:25:01.477 sys 0m11.493s 00:25:01.477 13:17:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:01.477 13:17:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:01.477 
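The killprocess calls traced above follow a fixed pattern: probe the pid with kill -0, look up the process name with ps for the log message, then kill and reap. A rough standalone reconstruction, with the helper's sudo-child handling omitted for brevity:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 0      # already exited, nothing to do
      local pname
      pname=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 for an SPDK app
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                         # reap it; a nonzero exit is expected here
  }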
************************************ 00:25:01.477 END TEST nvmf_host_multipath_status 00:25:01.477 ************************************ 00:25:01.477 13:17:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:01.477 13:17:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:01.477 13:17:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:01.477 13:17:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.736 ************************************ 00:25:01.736 START TEST nvmf_discovery_remove_ifc 00:25:01.736 ************************************ 00:25:01.736 13:17:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:01.736 * Looking for test storage... 00:25:01.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:01.737 13:17:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:01.737 13:17:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:25:01.737 13:17:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.737 --rc genhtml_branch_coverage=1 00:25:01.737 --rc genhtml_function_coverage=1 00:25:01.737 --rc genhtml_legend=1 00:25:01.737 --rc geninfo_all_blocks=1 00:25:01.737 --rc geninfo_unexecuted_blocks=1 00:25:01.737 00:25:01.737 ' 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.737 --rc genhtml_branch_coverage=1 00:25:01.737 --rc genhtml_function_coverage=1 00:25:01.737 --rc genhtml_legend=1 00:25:01.737 --rc geninfo_all_blocks=1 00:25:01.737 --rc geninfo_unexecuted_blocks=1 00:25:01.737 00:25:01.737 ' 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.737 --rc genhtml_branch_coverage=1 00:25:01.737 --rc genhtml_function_coverage=1 00:25:01.737 --rc genhtml_legend=1 00:25:01.737 --rc geninfo_all_blocks=1 00:25:01.737 --rc geninfo_unexecuted_blocks=1 00:25:01.737 00:25:01.737 ' 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.737 --rc genhtml_branch_coverage=1 00:25:01.737 --rc genhtml_function_coverage=1 00:25:01.737 --rc genhtml_legend=1 00:25:01.737 --rc geninfo_all_blocks=1 00:25:01.737 --rc geninfo_unexecuted_blocks=1 00:25:01.737 00:25:01.737 ' 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:01.737 
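The cmp_versions walk traced above (`lt 1.15 2`, deciding which lcov flag set to export) is a field-by-field dotted-version compare. Reduced to a standalone sketch of the same logic; the zero-padding of missing fields is assumed from context:

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local IFS=.-:                  # split versions on '.', '-' and ':'
      local -a ver1 ver2
      local op=$2 v
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
      done
      [[ $op == *'='* ]]             # all fields equal: only <= and >= match
  }
  lt 1.15 2 && echo "lcov is older than 2.x"   # the first field decides: 1 < 2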
13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:01.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:01.737 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:01.738 13:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:08.313 13:17:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:08.313 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.313 13:17:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:08.313 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:08.313 Found net devices under 0000:86:00.0: cvl_0_0 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:08.313 Found net devices under 0000:86:00.1: cvl_0_1 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.313 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.314 
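Collected from the trace above (and the ping checks that follow), the target-namespace plumbing is a dozen ip/iptables commands. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this rig, where the two ports are looped back-to-back; substitute your own:

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                            # target port moves into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP inside the ns
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                         # root ns -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                     # target ns -> initiator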
13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:25:08.314 00:25:08.314 --- 10.0.0.2 ping statistics --- 00:25:08.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.314 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:08.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:25:08.314 00:25:08.314 --- 10.0.0.1 ping statistics --- 00:25:08.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.314 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.314 13:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2963213 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2963213 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2963213 ']' 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:08.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.314 [2024-11-19 13:17:11.048304] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:25:08.314 [2024-11-19 13:17:11.048351] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.314 [2024-11-19 13:17:11.126944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.314 [2024-11-19 13:17:11.167990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.314 [2024-11-19 13:17:11.168027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.314 [2024-11-19 13:17:11.168035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.314 [2024-11-19 13:17:11.168041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.314 [2024-11-19 13:17:11.168046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.314 [2024-11-19 13:17:11.168582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.314 [2024-11-19 13:17:11.319173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.314 [2024-11-19 13:17:11.327347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:08.314 null0 00:25:08.314 [2024-11-19 13:17:11.359345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2963432 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2963432 /tmp/host.sock 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2963432 ']' 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:08.314 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.314 [2024-11-19 13:17:11.428445] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:25:08.314 [2024-11-19 13:17:11.428487] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2963432 ] 00:25:08.314 [2024-11-19 13:17:11.501630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.314 [2024-11-19 13:17:11.544435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:08.314 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.315 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:08.315 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.315 13:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:09.694 [2024-11-19 13:17:12.685310] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:09.694 [2024-11-19 13:17:12.685329] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:09.694 [2024-11-19 13:17:12.685349] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:09.694 [2024-11-19 13:17:12.771610] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:09.694 [2024-11-19 13:17:12.866238] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:09.694 [2024-11-19 13:17:12.867023] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x25259f0:1 started. 00:25:09.694 [2024-11-19 13:17:12.868354] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:09.694 [2024-11-19 13:17:12.868393] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:09.694 [2024-11-19 13:17:12.868412] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:09.694 [2024-11-19 13:17:12.868423] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:09.694 [2024-11-19 13:17:12.868439] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:09.694 13:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.694 13:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:09.694 13:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:09.694 13:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.694 [2024-11-19 13:17:12.874561] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x25259f0 was disconnected and freed. delete nvme_qpair. 
00:25:09.694 13:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:09.694 13:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.695 13:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:09.695 13:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:09.695 13:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:09.695 13:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.695 13:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:09.695 13:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:09.695 13:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:09.695 13:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:09.695 13:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:09.695 13:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.695 13:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:09.695 13:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.695 13:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:09.695 13:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:09.695 13:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:09.695 13:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.695 13:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:09.695 13:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:11.072 13:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:11.072 13:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.072 13:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:11.072 13:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.072 13:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:11.072 13:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:11.072 13:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:11.072 13:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.072 13:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:11.072 13:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:12.009 13:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:12.010 13:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.010 13:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:12.010 13:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.010 13:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:12.010 13:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:12.010 13:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:12.010 13:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.010 13:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:12.010 13:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:12.947 13:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:12.947 13:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:12.947 13:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.947 13:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.947 13:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:12.947 13:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:12.947 13:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:12.947 13:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.947 13:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:12.947 13:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:13.939 13:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:13.939 13:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.939 13:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:13.939 13:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.939 13:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:13.939 13:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:13.939 13:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:13.939 13:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:13.939 13:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:13.939 13:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:14.928 13:17:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:14.928 13:17:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.928 13:17:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:14.928 13:17:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.928 13:17:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:14.928 13:17:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:14.928 13:17:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:14.928 13:17:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.188 13:17:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:15.188 13:17:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:15.188 [2024-11-19 13:17:18.309969] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:15.188 [2024-11-19 13:17:18.310005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.188 [2024-11-19 13:17:18.310015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.188 [2024-11-19 13:17:18.310023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.188 [2024-11-19 13:17:18.310030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.188 [2024-11-19 13:17:18.310037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.188 [2024-11-19 13:17:18.310044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.188 [2024-11-19 13:17:18.310051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.188 [2024-11-19 13:17:18.310058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.188 [2024-11-19 13:17:18.310065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.188 [2024-11-19 13:17:18.310072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.188 [2024-11-19 13:17:18.310079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2502220 is same with the state(6) 
to be set 00:25:15.188 [2024-11-19 13:17:18.319991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2502220 (9): Bad file descriptor 00:25:15.188 [2024-11-19 13:17:18.330025] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:15.188 [2024-11-19 13:17:18.330037] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:15.188 [2024-11-19 13:17:18.330042] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:15.188 [2024-11-19 13:17:18.330046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:15.188 [2024-11-19 13:17:18.330071] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:16.125 13:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:16.125 13:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:16.125 13:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:16.125 13:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.125 13:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:16.125 13:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.125 13:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:16.125 [2024-11-19 13:17:19.344403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:16.125 [2024-11-19 13:17:19.344470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2502220 with addr=10.0.0.2, port=4420 00:25:16.125 [2024-11-19 13:17:19.344500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2502220 is same with the state(6) to be set 00:25:16.125 [2024-11-19 13:17:19.344551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2502220 (9): Bad file descriptor 00:25:16.126 [2024-11-19 13:17:19.345488] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:25:16.126 [2024-11-19 13:17:19.345551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:16.126 [2024-11-19 13:17:19.345575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:16.126 [2024-11-19 13:17:19.345597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:16.126 [2024-11-19 13:17:19.345617] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:16.126 [2024-11-19 13:17:19.345633] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:16.126 [2024-11-19 13:17:19.345646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
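The repeated bdev_get_bdevs | jq | sort | xargs snapshots above come from the test's wait_for_bdev poll: first waiting for nvme0n1 to appear, then, once the interface is deleted, waiting for the bdev list to drain. A sketch of that loop; rpc_cmd in the trace is the suite's wrapper around rpc.py, and the bounded retry here is an assumption (the real helper leans on the global test timeout):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  get_bdev_list() {
      "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      local expected=$1 tries=60
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          (( tries-- > 0 )) || return 1     # retry bound added in this sketch
          sleep 1
      done
  }
  wait_for_bdev nvme0n1    # discovery attached the namespace
  wait_for_bdev ''         # list empties after the interface removal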
00:25:16.126 [2024-11-19 13:17:19.345666] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:16.126 [2024-11-19 13:17:19.345680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:16.126 13:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.126 13:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:16.126 13:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:17.061 [2024-11-19 13:17:20.348203] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:17.061 [2024-11-19 13:17:20.348230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:17.061 [2024-11-19 13:17:20.348243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:17.061 [2024-11-19 13:17:20.348251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:17.061 [2024-11-19 13:17:20.348259] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:17.061 [2024-11-19 13:17:20.348266] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:17.061 [2024-11-19 13:17:20.348271] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:17.061 [2024-11-19 13:17:20.348279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:17.061 [2024-11-19 13:17:20.348304] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:17.061 [2024-11-19 13:17:20.348328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.061 [2024-11-19 13:17:20.348338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.061 [2024-11-19 13:17:20.348347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.061 [2024-11-19 13:17:20.348355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.061 [2024-11-19 13:17:20.348363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.061 [2024-11-19 13:17:20.348370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.061 [2024-11-19 13:17:20.348377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.061 [2024-11-19 13:17:20.348384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.061 [2024-11-19 13:17:20.348394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.061 [2024-11-19 13:17:20.348400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.061 [2024-11-19 13:17:20.348407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:25:17.061 [2024-11-19 13:17:20.348838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f1900 (9): Bad file descriptor 00:25:17.061 [2024-11-19 13:17:20.349848] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:17.061 [2024-11-19 13:17:20.349860] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:17.061 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.061 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.061 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.061 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.061 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.061 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.061 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.061 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.061 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:17.061 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.061 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.320 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:17.320 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.320 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.320 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.320 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.320 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.320 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.320 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.320 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.320 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:17.320 13:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:18.258 13:17:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.258 13:17:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.258 13:17:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.258 13:17:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.258 13:17:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.258 13:17:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.258 13:17:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.258 13:17:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.258 13:17:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:18.258 13:17:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:19.194 [2024-11-19 13:17:22.401112] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:19.194 [2024-11-19 13:17:22.401128] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:19.194 [2024-11-19 13:17:22.401143] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:19.194 [2024-11-19 13:17:22.487406] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:19.454 [2024-11-19 13:17:22.583151] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:19.454 [2024-11-19 13:17:22.583693] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x24f6760:1 started. 00:25:19.454 [2024-11-19 13:17:22.584743] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:19.454 [2024-11-19 13:17:22.584774] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:19.454 [2024-11-19 13:17:22.584791] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:19.454 [2024-11-19 13:17:22.584804] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:19.454 [2024-11-19 13:17:22.584810] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:19.454 [2024-11-19 13:17:22.589413] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x24f6760 was disconnected and freed. delete nvme_qpair. 
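The loop traced above (host/discovery_remove_ifc.sh@33/@34) is the whole waiting mechanism: list bdev names over the host RPC socket, compare against the expected name, sleep, repeat. A minimal standalone sketch of that pattern, assuming rpc_cmd wraps scripts/rpc.py against the given socket as in SPDK's shared test helpers:

    # Sketch of the get_bdev_list/wait_for_bdev pattern traced above.
    get_bdev_list() {
        # All bdev names, sorted and joined onto one line for comparison.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list matches the expected name.
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme1n1

Here the loop exits once discovery re-attaches the subsystem and nvme1n1 reappears in the bdev_get_bdevs output, which is what the @33 comparisons above converge to.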
00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2963432 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2963432 ']' 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2963432 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2963432 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2963432' 00:25:19.454 killing process with pid 2963432 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2963432 00:25:19.454 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2963432 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:19.713 rmmod nvme_tcp 00:25:19.713 rmmod nvme_fabrics 00:25:19.713 rmmod nvme_keyring 00:25:19.713 13:17:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2963213 ']' 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2963213 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2963213 ']' 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2963213 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2963213 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2963213' 00:25:19.713 killing process with pid 2963213 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2963213 00:25:19.713 13:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2963213 00:25:19.973 13:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:19.973 13:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:19.973 13:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:19.973 13:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:19.973 13:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:19.973 13:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:19.973 13:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:19.973 13:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:19.973 13:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:19.973 13:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.973 13:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.973 13:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.880 13:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:21.880 00:25:21.880 real 0m20.339s 00:25:21.880 user 0m24.406s 00:25:21.880 sys 0m5.886s 00:25:21.880 13:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:25:21.880 13:17:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.880 ************************************ 00:25:21.880 END TEST nvmf_discovery_remove_ifc 00:25:21.880 ************************************ 00:25:21.880 13:17:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:21.880 13:17:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:21.880 13:17:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:21.880 13:17:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.141 ************************************ 00:25:22.141 START TEST nvmf_identify_kernel_target 00:25:22.141 ************************************ 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:22.141 * Looking for test storage... 00:25:22.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:22.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.141 --rc genhtml_branch_coverage=1 00:25:22.141 --rc genhtml_function_coverage=1 00:25:22.141 --rc genhtml_legend=1 00:25:22.141 --rc geninfo_all_blocks=1 00:25:22.141 --rc geninfo_unexecuted_blocks=1 00:25:22.141 00:25:22.141 ' 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:22.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.141 --rc genhtml_branch_coverage=1 00:25:22.141 --rc genhtml_function_coverage=1 00:25:22.141 --rc genhtml_legend=1 00:25:22.141 --rc geninfo_all_blocks=1 00:25:22.141 --rc geninfo_unexecuted_blocks=1 00:25:22.141 00:25:22.141 ' 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:22.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.141 --rc genhtml_branch_coverage=1 00:25:22.141 --rc genhtml_function_coverage=1 00:25:22.141 --rc genhtml_legend=1 00:25:22.141 --rc geninfo_all_blocks=1 00:25:22.141 --rc geninfo_unexecuted_blocks=1 00:25:22.141 00:25:22.141 ' 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:22.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.141 --rc genhtml_branch_coverage=1 00:25:22.141 --rc genhtml_function_coverage=1 00:25:22.141 --rc genhtml_legend=1 00:25:22.141 --rc geninfo_all_blocks=1 00:25:22.141 --rc geninfo_unexecuted_blocks=1 00:25:22.141 00:25:22.141 ' 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.141 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:25:22.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:22.142 13:17:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:28.717 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.717 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:28.717 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:28.717 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:28.717 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:28.717 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:28.717 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:28.717 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:28.717 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:28.717 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:28.718 13:17:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:28.718 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:28.718 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:28.718 Found net devices under 0000:86:00.0: cvl_0_0 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:28.718 Found net devices under 0000:86:00.1: cvl_0_1 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:28.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:25:28.718 00:25:28.718 --- 10.0.0.2 ping statistics --- 00:25:28.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.718 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:25:28.718 00:25:28.718 --- 10.0.0.1 ping statistics --- 00:25:28.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.718 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.718 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.719 13:17:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:28.719 13:17:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:31.257 Waiting for block devices as requested 00:25:31.257 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:31.257 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:31.257 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:31.257 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:31.257 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:31.257 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:31.517 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:31.517 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:31.517 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:31.776 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:31.776 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:31.776 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:31.776 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:32.035 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:32.035 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:32.035 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:32.295 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:32.295 No valid GPT data, bailing 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:32.295 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:32.556 00:25:32.556 Discovery Log Number of Records 2, Generation counter 2 00:25:32.556 =====Discovery Log Entry 0====== 00:25:32.556 trtype: tcp 00:25:32.556 adrfam: ipv4 00:25:32.556 subtype: current discovery subsystem 00:25:32.556 treq: not specified, sq flow control disable supported 00:25:32.556 portid: 1 00:25:32.556 trsvcid: 4420 00:25:32.556 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:32.556 traddr: 10.0.0.1 00:25:32.556 eflags: none 00:25:32.556 sectype: none 00:25:32.556 =====Discovery Log Entry 1====== 00:25:32.556 trtype: tcp 00:25:32.556 adrfam: ipv4 00:25:32.556 subtype: nvme subsystem 00:25:32.556 treq: not specified, sq flow control disable 
supported 00:25:32.556 portid: 1 00:25:32.556 trsvcid: 4420 00:25:32.556 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:32.556 traddr: 10.0.0.1 00:25:32.556 eflags: none 00:25:32.556 sectype: none 00:25:32.556 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:32.556 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:32.556 ===================================================== 00:25:32.556 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:32.556 ===================================================== 00:25:32.556 Controller Capabilities/Features 00:25:32.556 ================================ 00:25:32.556 Vendor ID: 0000 00:25:32.556 Subsystem Vendor ID: 0000 00:25:32.556 Serial Number: 06a97bb06d833bd1a094 00:25:32.556 Model Number: Linux 00:25:32.556 Firmware Version: 6.8.9-20 00:25:32.556 Recommended Arb Burst: 0 00:25:32.556 IEEE OUI Identifier: 00 00 00 00:25:32.556 Multi-path I/O 00:25:32.556 May have multiple subsystem ports: No 00:25:32.556 May have multiple controllers: No 00:25:32.556 Associated with SR-IOV VF: No 00:25:32.556 Max Data Transfer Size: Unlimited 00:25:32.556 Max Number of Namespaces: 0 00:25:32.556 Max Number of I/O Queues: 1024 00:25:32.556 NVMe Specification Version (VS): 1.3 00:25:32.556 NVMe Specification Version (Identify): 1.3 00:25:32.556 Maximum Queue Entries: 1024 00:25:32.556 Contiguous Queues Required: No 00:25:32.556 Arbitration Mechanisms Supported 00:25:32.556 Weighted Round Robin: Not Supported 00:25:32.556 Vendor Specific: Not Supported 00:25:32.556 Reset Timeout: 7500 ms 00:25:32.556 Doorbell Stride: 4 bytes 00:25:32.556 NVM Subsystem Reset: Not Supported 00:25:32.556 Command Sets Supported 00:25:32.556 NVM Command Set: Supported 00:25:32.556 Boot Partition: Not Supported 00:25:32.556 Memory Page Size Minimum: 4096 bytes 00:25:32.556 Memory Page Size Maximum: 4096 bytes 00:25:32.556 Persistent Memory Region: Not Supported 00:25:32.556 Optional Asynchronous Events Supported 00:25:32.556 Namespace Attribute Notices: Not Supported 00:25:32.556 Firmware Activation Notices: Not Supported 00:25:32.556 ANA Change Notices: Not Supported 00:25:32.556 PLE Aggregate Log Change Notices: Not Supported 00:25:32.556 LBA Status Info Alert Notices: Not Supported 00:25:32.556 EGE Aggregate Log Change Notices: Not Supported 00:25:32.556 Normal NVM Subsystem Shutdown event: Not Supported 00:25:32.556 Zone Descriptor Change Notices: Not Supported 00:25:32.556 Discovery Log Change Notices: Supported 00:25:32.556 Controller Attributes 00:25:32.556 128-bit Host Identifier: Not Supported 00:25:32.556 Non-Operational Permissive Mode: Not Supported 00:25:32.556 NVM Sets: Not Supported 00:25:32.556 Read Recovery Levels: Not Supported 00:25:32.556 Endurance Groups: Not Supported 00:25:32.556 Predictable Latency Mode: Not Supported 00:25:32.556 Traffic Based Keep ALive: Not Supported 00:25:32.556 Namespace Granularity: Not Supported 00:25:32.556 SQ Associations: Not Supported 00:25:32.556 UUID List: Not Supported 00:25:32.556 Multi-Domain Subsystem: Not Supported 00:25:32.556 Fixed Capacity Management: Not Supported 00:25:32.556 Variable Capacity Management: Not Supported 00:25:32.556 Delete Endurance Group: Not Supported 00:25:32.556 Delete NVM Set: Not Supported 00:25:32.556 Extended LBA Formats Supported: Not Supported 00:25:32.556 Flexible Data Placement 
Supported: Not Supported 00:25:32.556 00:25:32.556 Controller Memory Buffer Support 00:25:32.556 ================================ 00:25:32.556 Supported: No 00:25:32.556 00:25:32.556 Persistent Memory Region Support 00:25:32.556 ================================ 00:25:32.556 Supported: No 00:25:32.556 00:25:32.556 Admin Command Set Attributes 00:25:32.556 ============================ 00:25:32.556 Security Send/Receive: Not Supported 00:25:32.556 Format NVM: Not Supported 00:25:32.556 Firmware Activate/Download: Not Supported 00:25:32.556 Namespace Management: Not Supported 00:25:32.556 Device Self-Test: Not Supported 00:25:32.556 Directives: Not Supported 00:25:32.556 NVMe-MI: Not Supported 00:25:32.556 Virtualization Management: Not Supported 00:25:32.556 Doorbell Buffer Config: Not Supported 00:25:32.556 Get LBA Status Capability: Not Supported 00:25:32.556 Command & Feature Lockdown Capability: Not Supported 00:25:32.556 Abort Command Limit: 1 00:25:32.556 Async Event Request Limit: 1 00:25:32.556 Number of Firmware Slots: N/A 00:25:32.556 Firmware Slot 1 Read-Only: N/A 00:25:32.556 Firmware Activation Without Reset: N/A 00:25:32.556 Multiple Update Detection Support: N/A 00:25:32.556 Firmware Update Granularity: No Information Provided 00:25:32.556 Per-Namespace SMART Log: No 00:25:32.556 Asymmetric Namespace Access Log Page: Not Supported 00:25:32.556 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:32.556 Command Effects Log Page: Not Supported 00:25:32.557 Get Log Page Extended Data: Supported 00:25:32.557 Telemetry Log Pages: Not Supported 00:25:32.557 Persistent Event Log Pages: Not Supported 00:25:32.557 Supported Log Pages Log Page: May Support 00:25:32.557 Commands Supported & Effects Log Page: Not Supported 00:25:32.557 Feature Identifiers & Effects Log Page:May Support 00:25:32.557 NVMe-MI Commands & Effects Log Page: May Support 00:25:32.557 Data Area 4 for Telemetry Log: Not Supported 00:25:32.557 Error Log Page Entries Supported: 1 00:25:32.557 Keep Alive: Not Supported 00:25:32.557 00:25:32.557 NVM Command Set Attributes 00:25:32.557 ========================== 00:25:32.557 Submission Queue Entry Size 00:25:32.557 Max: 1 00:25:32.557 Min: 1 00:25:32.557 Completion Queue Entry Size 00:25:32.557 Max: 1 00:25:32.557 Min: 1 00:25:32.557 Number of Namespaces: 0 00:25:32.557 Compare Command: Not Supported 00:25:32.557 Write Uncorrectable Command: Not Supported 00:25:32.557 Dataset Management Command: Not Supported 00:25:32.557 Write Zeroes Command: Not Supported 00:25:32.557 Set Features Save Field: Not Supported 00:25:32.557 Reservations: Not Supported 00:25:32.557 Timestamp: Not Supported 00:25:32.557 Copy: Not Supported 00:25:32.557 Volatile Write Cache: Not Present 00:25:32.557 Atomic Write Unit (Normal): 1 00:25:32.557 Atomic Write Unit (PFail): 1 00:25:32.557 Atomic Compare & Write Unit: 1 00:25:32.557 Fused Compare & Write: Not Supported 00:25:32.557 Scatter-Gather List 00:25:32.557 SGL Command Set: Supported 00:25:32.557 SGL Keyed: Not Supported 00:25:32.557 SGL Bit Bucket Descriptor: Not Supported 00:25:32.557 SGL Metadata Pointer: Not Supported 00:25:32.557 Oversized SGL: Not Supported 00:25:32.557 SGL Metadata Address: Not Supported 00:25:32.557 SGL Offset: Supported 00:25:32.557 Transport SGL Data Block: Not Supported 00:25:32.557 Replay Protected Memory Block: Not Supported 00:25:32.557 00:25:32.557 Firmware Slot Information 00:25:32.557 ========================= 00:25:32.557 Active slot: 0 00:25:32.557 00:25:32.557 00:25:32.557 Error Log 00:25:32.557 
========= 00:25:32.557 00:25:32.557 Active Namespaces 00:25:32.557 ================= 00:25:32.557 Discovery Log Page 00:25:32.557 ================== 00:25:32.557 Generation Counter: 2 00:25:32.557 Number of Records: 2 00:25:32.557 Record Format: 0 00:25:32.557 00:25:32.557 Discovery Log Entry 0 00:25:32.557 ---------------------- 00:25:32.557 Transport Type: 3 (TCP) 00:25:32.557 Address Family: 1 (IPv4) 00:25:32.557 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:32.557 Entry Flags: 00:25:32.557 Duplicate Returned Information: 0 00:25:32.557 Explicit Persistent Connection Support for Discovery: 0 00:25:32.557 Transport Requirements: 00:25:32.557 Secure Channel: Not Specified 00:25:32.557 Port ID: 1 (0x0001) 00:25:32.557 Controller ID: 65535 (0xffff) 00:25:32.557 Admin Max SQ Size: 32 00:25:32.557 Transport Service Identifier: 4420 00:25:32.557 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:32.557 Transport Address: 10.0.0.1 00:25:32.557 Discovery Log Entry 1 00:25:32.557 ---------------------- 00:25:32.557 Transport Type: 3 (TCP) 00:25:32.557 Address Family: 1 (IPv4) 00:25:32.557 Subsystem Type: 2 (NVM Subsystem) 00:25:32.557 Entry Flags: 00:25:32.557 Duplicate Returned Information: 0 00:25:32.557 Explicit Persistent Connection Support for Discovery: 0 00:25:32.557 Transport Requirements: 00:25:32.557 Secure Channel: Not Specified 00:25:32.557 Port ID: 1 (0x0001) 00:25:32.557 Controller ID: 65535 (0xffff) 00:25:32.557 Admin Max SQ Size: 32 00:25:32.557 Transport Service Identifier: 4420 00:25:32.557 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:32.557 Transport Address: 10.0.0.1 00:25:32.557 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:32.557 get_feature(0x01) failed 00:25:32.557 get_feature(0x02) failed 00:25:32.557 get_feature(0x04) failed 00:25:32.557 ===================================================== 00:25:32.557 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:32.557 ===================================================== 00:25:32.557 Controller Capabilities/Features 00:25:32.557 ================================ 00:25:32.557 Vendor ID: 0000 00:25:32.557 Subsystem Vendor ID: 0000 00:25:32.557 Serial Number: 5cc197ec1818fe001228 00:25:32.557 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:32.557 Firmware Version: 6.8.9-20 00:25:32.557 Recommended Arb Burst: 6 00:25:32.557 IEEE OUI Identifier: 00 00 00 00:25:32.557 Multi-path I/O 00:25:32.557 May have multiple subsystem ports: Yes 00:25:32.557 May have multiple controllers: Yes 00:25:32.557 Associated with SR-IOV VF: No 00:25:32.557 Max Data Transfer Size: Unlimited 00:25:32.557 Max Number of Namespaces: 1024 00:25:32.557 Max Number of I/O Queues: 128 00:25:32.557 NVMe Specification Version (VS): 1.3 00:25:32.557 NVMe Specification Version (Identify): 1.3 00:25:32.557 Maximum Queue Entries: 1024 00:25:32.557 Contiguous Queues Required: No 00:25:32.557 Arbitration Mechanisms Supported 00:25:32.557 Weighted Round Robin: Not Supported 00:25:32.557 Vendor Specific: Not Supported 00:25:32.557 Reset Timeout: 7500 ms 00:25:32.557 Doorbell Stride: 4 bytes 00:25:32.557 NVM Subsystem Reset: Not Supported 00:25:32.557 Command Sets Supported 00:25:32.557 NVM Command Set: Supported 00:25:32.557 Boot Partition: Not Supported 00:25:32.557 
Memory Page Size Minimum: 4096 bytes 00:25:32.557 Memory Page Size Maximum: 4096 bytes 00:25:32.557 Persistent Memory Region: Not Supported 00:25:32.557 Optional Asynchronous Events Supported 00:25:32.557 Namespace Attribute Notices: Supported 00:25:32.557 Firmware Activation Notices: Not Supported 00:25:32.557 ANA Change Notices: Supported 00:25:32.557 PLE Aggregate Log Change Notices: Not Supported 00:25:32.557 LBA Status Info Alert Notices: Not Supported 00:25:32.557 EGE Aggregate Log Change Notices: Not Supported 00:25:32.557 Normal NVM Subsystem Shutdown event: Not Supported 00:25:32.557 Zone Descriptor Change Notices: Not Supported 00:25:32.557 Discovery Log Change Notices: Not Supported 00:25:32.557 Controller Attributes 00:25:32.557 128-bit Host Identifier: Supported 00:25:32.557 Non-Operational Permissive Mode: Not Supported 00:25:32.557 NVM Sets: Not Supported 00:25:32.557 Read Recovery Levels: Not Supported 00:25:32.557 Endurance Groups: Not Supported 00:25:32.557 Predictable Latency Mode: Not Supported 00:25:32.557 Traffic Based Keep ALive: Supported 00:25:32.557 Namespace Granularity: Not Supported 00:25:32.557 SQ Associations: Not Supported 00:25:32.557 UUID List: Not Supported 00:25:32.557 Multi-Domain Subsystem: Not Supported 00:25:32.557 Fixed Capacity Management: Not Supported 00:25:32.557 Variable Capacity Management: Not Supported 00:25:32.557 Delete Endurance Group: Not Supported 00:25:32.557 Delete NVM Set: Not Supported 00:25:32.557 Extended LBA Formats Supported: Not Supported 00:25:32.557 Flexible Data Placement Supported: Not Supported 00:25:32.557 00:25:32.557 Controller Memory Buffer Support 00:25:32.557 ================================ 00:25:32.557 Supported: No 00:25:32.557 00:25:32.557 Persistent Memory Region Support 00:25:32.557 ================================ 00:25:32.557 Supported: No 00:25:32.557 00:25:32.557 Admin Command Set Attributes 00:25:32.557 ============================ 00:25:32.557 Security Send/Receive: Not Supported 00:25:32.557 Format NVM: Not Supported 00:25:32.557 Firmware Activate/Download: Not Supported 00:25:32.557 Namespace Management: Not Supported 00:25:32.557 Device Self-Test: Not Supported 00:25:32.557 Directives: Not Supported 00:25:32.557 NVMe-MI: Not Supported 00:25:32.557 Virtualization Management: Not Supported 00:25:32.557 Doorbell Buffer Config: Not Supported 00:25:32.557 Get LBA Status Capability: Not Supported 00:25:32.557 Command & Feature Lockdown Capability: Not Supported 00:25:32.557 Abort Command Limit: 4 00:25:32.557 Async Event Request Limit: 4 00:25:32.557 Number of Firmware Slots: N/A 00:25:32.557 Firmware Slot 1 Read-Only: N/A 00:25:32.557 Firmware Activation Without Reset: N/A 00:25:32.557 Multiple Update Detection Support: N/A 00:25:32.557 Firmware Update Granularity: No Information Provided 00:25:32.557 Per-Namespace SMART Log: Yes 00:25:32.557 Asymmetric Namespace Access Log Page: Supported 00:25:32.557 ANA Transition Time : 10 sec 00:25:32.557 00:25:32.557 Asymmetric Namespace Access Capabilities 00:25:32.557 ANA Optimized State : Supported 00:25:32.557 ANA Non-Optimized State : Supported 00:25:32.558 ANA Inaccessible State : Supported 00:25:32.558 ANA Persistent Loss State : Supported 00:25:32.558 ANA Change State : Supported 00:25:32.558 ANAGRPID is not changed : No 00:25:32.558 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:32.558 00:25:32.558 ANA Group Identifier Maximum : 128 00:25:32.558 Number of ANA Group Identifiers : 128 00:25:32.558 Max Number of Allowed Namespaces : 1024 00:25:32.558 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:32.558 Command Effects Log Page: Supported 00:25:32.558 Get Log Page Extended Data: Supported 00:25:32.558 Telemetry Log Pages: Not Supported 00:25:32.558 Persistent Event Log Pages: Not Supported 00:25:32.558 Supported Log Pages Log Page: May Support 00:25:32.558 Commands Supported & Effects Log Page: Not Supported 00:25:32.558 Feature Identifiers & Effects Log Page:May Support 00:25:32.558 NVMe-MI Commands & Effects Log Page: May Support 00:25:32.558 Data Area 4 for Telemetry Log: Not Supported 00:25:32.558 Error Log Page Entries Supported: 128 00:25:32.558 Keep Alive: Supported 00:25:32.558 Keep Alive Granularity: 1000 ms 00:25:32.558 00:25:32.558 NVM Command Set Attributes 00:25:32.558 ========================== 00:25:32.558 Submission Queue Entry Size 00:25:32.558 Max: 64 00:25:32.558 Min: 64 00:25:32.558 Completion Queue Entry Size 00:25:32.558 Max: 16 00:25:32.558 Min: 16 00:25:32.558 Number of Namespaces: 1024 00:25:32.558 Compare Command: Not Supported 00:25:32.558 Write Uncorrectable Command: Not Supported 00:25:32.558 Dataset Management Command: Supported 00:25:32.558 Write Zeroes Command: Supported 00:25:32.558 Set Features Save Field: Not Supported 00:25:32.558 Reservations: Not Supported 00:25:32.558 Timestamp: Not Supported 00:25:32.558 Copy: Not Supported 00:25:32.558 Volatile Write Cache: Present 00:25:32.558 Atomic Write Unit (Normal): 1 00:25:32.558 Atomic Write Unit (PFail): 1 00:25:32.558 Atomic Compare & Write Unit: 1 00:25:32.558 Fused Compare & Write: Not Supported 00:25:32.558 Scatter-Gather List 00:25:32.558 SGL Command Set: Supported 00:25:32.558 SGL Keyed: Not Supported 00:25:32.558 SGL Bit Bucket Descriptor: Not Supported 00:25:32.558 SGL Metadata Pointer: Not Supported 00:25:32.558 Oversized SGL: Not Supported 00:25:32.558 SGL Metadata Address: Not Supported 00:25:32.558 SGL Offset: Supported 00:25:32.558 Transport SGL Data Block: Not Supported 00:25:32.558 Replay Protected Memory Block: Not Supported 00:25:32.558 00:25:32.558 Firmware Slot Information 00:25:32.558 ========================= 00:25:32.558 Active slot: 0 00:25:32.558 00:25:32.558 Asymmetric Namespace Access 00:25:32.558 =========================== 00:25:32.558 Change Count : 0 00:25:32.558 Number of ANA Group Descriptors : 1 00:25:32.558 ANA Group Descriptor : 0 00:25:32.558 ANA Group ID : 1 00:25:32.558 Number of NSID Values : 1 00:25:32.558 Change Count : 0 00:25:32.558 ANA State : 1 00:25:32.558 Namespace Identifier : 1 00:25:32.558 00:25:32.558 Commands Supported and Effects 00:25:32.558 ============================== 00:25:32.558 Admin Commands 00:25:32.558 -------------- 00:25:32.558 Get Log Page (02h): Supported 00:25:32.558 Identify (06h): Supported 00:25:32.558 Abort (08h): Supported 00:25:32.558 Set Features (09h): Supported 00:25:32.558 Get Features (0Ah): Supported 00:25:32.558 Asynchronous Event Request (0Ch): Supported 00:25:32.558 Keep Alive (18h): Supported 00:25:32.558 I/O Commands 00:25:32.558 ------------ 00:25:32.558 Flush (00h): Supported 00:25:32.558 Write (01h): Supported LBA-Change 00:25:32.558 Read (02h): Supported 00:25:32.558 Write Zeroes (08h): Supported LBA-Change 00:25:32.558 Dataset Management (09h): Supported 00:25:32.558 00:25:32.558 Error Log 00:25:32.558 ========= 00:25:32.558 Entry: 0 00:25:32.558 Error Count: 0x3 00:25:32.558 Submission Queue Id: 0x0 00:25:32.558 Command Id: 0x5 00:25:32.558 Phase Bit: 0 00:25:32.558 Status Code: 0x2 00:25:32.558 Status Code Type: 0x0 00:25:32.558 Do Not Retry: 1 00:25:32.558 
Error Location: 0x28 00:25:32.558 LBA: 0x0 00:25:32.558 Namespace: 0x0 00:25:32.558 Vendor Log Page: 0x0 00:25:32.558 ----------- 00:25:32.558 Entry: 1 00:25:32.558 Error Count: 0x2 00:25:32.558 Submission Queue Id: 0x0 00:25:32.558 Command Id: 0x5 00:25:32.558 Phase Bit: 0 00:25:32.558 Status Code: 0x2 00:25:32.558 Status Code Type: 0x0 00:25:32.558 Do Not Retry: 1 00:25:32.558 Error Location: 0x28 00:25:32.558 LBA: 0x0 00:25:32.558 Namespace: 0x0 00:25:32.558 Vendor Log Page: 0x0 00:25:32.558 ----------- 00:25:32.558 Entry: 2 00:25:32.558 Error Count: 0x1 00:25:32.558 Submission Queue Id: 0x0 00:25:32.558 Command Id: 0x4 00:25:32.558 Phase Bit: 0 00:25:32.558 Status Code: 0x2 00:25:32.558 Status Code Type: 0x0 00:25:32.558 Do Not Retry: 1 00:25:32.558 Error Location: 0x28 00:25:32.558 LBA: 0x0 00:25:32.558 Namespace: 0x0 00:25:32.558 Vendor Log Page: 0x0 00:25:32.558 00:25:32.558 Number of Queues 00:25:32.558 ================ 00:25:32.558 Number of I/O Submission Queues: 128 00:25:32.558 Number of I/O Completion Queues: 128 00:25:32.558 00:25:32.558 ZNS Specific Controller Data 00:25:32.558 ============================ 00:25:32.558 Zone Append Size Limit: 0 00:25:32.558 00:25:32.558 00:25:32.558 Active Namespaces 00:25:32.558 ================= 00:25:32.558 get_feature(0x05) failed 00:25:32.558 Namespace ID:1 00:25:32.558 Command Set Identifier: NVM (00h) 00:25:32.558 Deallocate: Supported 00:25:32.558 Deallocated/Unwritten Error: Not Supported 00:25:32.558 Deallocated Read Value: Unknown 00:25:32.558 Deallocate in Write Zeroes: Not Supported 00:25:32.558 Deallocated Guard Field: 0xFFFF 00:25:32.558 Flush: Supported 00:25:32.558 Reservation: Not Supported 00:25:32.558 Namespace Sharing Capabilities: Multiple Controllers 00:25:32.558 Size (in LBAs): 1953525168 (931GiB) 00:25:32.558 Capacity (in LBAs): 1953525168 (931GiB) 00:25:32.558 Utilization (in LBAs): 1953525168 (931GiB) 00:25:32.558 UUID: 4e25fa40-9d91-4944-a343-0efc6d0fdefb 00:25:32.558 Thin Provisioning: Not Supported 00:25:32.558 Per-NS Atomic Units: Yes 00:25:32.558 Atomic Boundary Size (Normal): 0 00:25:32.558 Atomic Boundary Size (PFail): 0 00:25:32.558 Atomic Boundary Offset: 0 00:25:32.558 NGUID/EUI64 Never Reused: No 00:25:32.558 ANA group ID: 1 00:25:32.558 Namespace Write Protected: No 00:25:32.558 Number of LBA Formats: 1 00:25:32.558 Current LBA Format: LBA Format #00 00:25:32.558 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:32.558 00:25:32.558 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:32.558 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:32.558 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:32.558 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:32.558 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:32.558 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:32.558 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:32.558 rmmod nvme_tcp 00:25:32.558 rmmod nvme_fabrics 00:25:32.558 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:32.818 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:32.818 13:17:35 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:32.818 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:25:32.818 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:32.818 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:32.818 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:32.818 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:32.818 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:32.818 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:32.818 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:32.818 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:32.818 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:32.818 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.818 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:32.818 13:17:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.725 13:17:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:34.725 13:17:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:34.725 13:17:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:34.725 13:17:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:34.725 13:17:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:34.725 13:17:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:34.725 13:17:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:34.725 13:17:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:34.725 13:17:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:34.725 13:17:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:34.725 13:17:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:38.015 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:38.015 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:38.015 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:38.015 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:38.015 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:38.015 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:25:38.015 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:38.015 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:38.015 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:38.015 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:38.015 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:38.015 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:38.015 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:38.015 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:38.015 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:38.015 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:38.584 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:38.844 00:25:38.844 real 0m16.703s 00:25:38.844 user 0m4.384s 00:25:38.844 sys 0m8.700s 00:25:38.844 13:17:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:38.844 13:17:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.844 ************************************ 00:25:38.844 END TEST nvmf_identify_kernel_target 00:25:38.844 ************************************ 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.844 ************************************ 00:25:38.844 START TEST nvmf_auth_host 00:25:38.844 ************************************ 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:38.844 * Looking for test storage... 
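
The nvmf_identify_kernel_target run that just finished drives the Linux kernel target purely through configfs. The trace shows the mkdir/echo/ln steps but not the files the echoes are redirected into, so the sketch below reassembles them using the standard nvmet attribute names (attr_model, attr_allow_any_host, device_path, enable, addr_*); those file names are an inference from the nvmet configfs layout rather than something visible in the log. The NQN, backing device, and listen address are the values from the trace, and the nvmet/nvmet-tcp modules are assumed already loaded.

cfg=/sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn
mkdir "$cfg/subsystems/$nqn"                                     # subsystem
mkdir "$cfg/subsystems/$nqn/namespaces/1"                        # namespace 1
mkdir "$cfg/ports/1"                                             # listener port
echo "SPDK-$nqn"  > "$cfg/subsystems/$nqn/attr_model"            # model string
echo 1            > "$cfg/subsystems/$nqn/attr_allow_any_host"   # no host allow-list
echo /dev/nvme0n1 > "$cfg/subsystems/$nqn/namespaces/1/device_path"
echo 1            > "$cfg/subsystems/$nqn/namespaces/1/enable"
echo 10.0.0.1     > "$cfg/ports/1/addr_traddr"                   # NVMe/TCP listener
echo tcp          > "$cfg/ports/1/addr_trtype"
echo 4420         > "$cfg/ports/1/addr_trsvcid"
echo ipv4         > "$cfg/ports/1/addr_adrfam"
ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/"          # go live

Teardown, as traced by clean_kernel_target above: echo 0 into the namespace's enable file, rm -f the port's subsystem symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.
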
00:25:38.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:38.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.844 --rc genhtml_branch_coverage=1 00:25:38.844 --rc genhtml_function_coverage=1 00:25:38.844 --rc genhtml_legend=1 00:25:38.844 --rc geninfo_all_blocks=1 00:25:38.844 --rc geninfo_unexecuted_blocks=1 00:25:38.844 00:25:38.844 ' 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:38.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.844 --rc genhtml_branch_coverage=1 00:25:38.844 --rc genhtml_function_coverage=1 00:25:38.844 --rc genhtml_legend=1 00:25:38.844 --rc geninfo_all_blocks=1 00:25:38.844 --rc geninfo_unexecuted_blocks=1 00:25:38.844 00:25:38.844 ' 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:38.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.844 --rc genhtml_branch_coverage=1 00:25:38.844 --rc genhtml_function_coverage=1 00:25:38.844 --rc genhtml_legend=1 00:25:38.844 --rc geninfo_all_blocks=1 00:25:38.844 --rc geninfo_unexecuted_blocks=1 00:25:38.844 00:25:38.844 ' 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:38.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.844 --rc genhtml_branch_coverage=1 00:25:38.844 --rc genhtml_function_coverage=1 00:25:38.844 --rc genhtml_legend=1 00:25:38.844 --rc geninfo_all_blocks=1 00:25:38.844 --rc geninfo_unexecuted_blocks=1 00:25:38.844 00:25:38.844 ' 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.844 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:39.104 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.104 13:17:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.104 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.104 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.104 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.104 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.104 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.104 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:39.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:39.105 13:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:45.679 13:17:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:45.679 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:45.679 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.679 
13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:45.679 Found net devices under 0000:86:00.0: cvl_0_0 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.679 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:45.680 Found net devices under 0000:86:00.1: cvl_0_1 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.680 13:17:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.680 13:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:45.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:25:45.680 00:25:45.680 --- 10.0.0.2 ping statistics --- 00:25:45.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.680 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:45.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:25:45.680 00:25:45.680 --- 10.0.0.1 ping statistics --- 00:25:45.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.680 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2975202 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2975202 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2975202 ']' 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
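
The nvmf_tcp_init sequence above builds the test topology by splitting the two e810 ports across network namespaces: cvl_0_0 becomes the target side (10.0.0.2, inside cvl_0_0_ns_spdk) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the two pings confirm reachability in both directions. Condensed from the trace (stale addresses are flushed first):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                          # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target ns -> root ns

The comment tag on the iptables rule is what lets the iptables-save | grep -v SPDK_NVMF | iptables-restore cleanup (seen at the end of the previous test) drop only the rules this harness added.
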
00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:45.680 13:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.941 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.941 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:45.941 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:45.941 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:45.941 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.941 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.941 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:45.941 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:45.941 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9a049f94080bc4ad6f03d0f0b0a72724 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zKl 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9a049f94080bc4ad6f03d0f0b0a72724 0 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9a049f94080bc4ad6f03d0f0b0a72724 0 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9a049f94080bc4ad6f03d0f0b0a72724 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zKl 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zKl 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.zKl 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:45.942 13:17:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=090b8c8b4fecf820601d8b4531dcaaf73bd5dcc4a0d2332f0e0a1632ef31ba66 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gIG 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 090b8c8b4fecf820601d8b4531dcaaf73bd5dcc4a0d2332f0e0a1632ef31ba66 3 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 090b8c8b4fecf820601d8b4531dcaaf73bd5dcc4a0d2332f0e0a1632ef31ba66 3 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=090b8c8b4fecf820601d8b4531dcaaf73bd5dcc4a0d2332f0e0a1632ef31ba66 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gIG 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gIG 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.gIG 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3aa8a982608bf1607b6ebb22d02872e2672a950e862a7380 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.j47 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3aa8a982608bf1607b6ebb22d02872e2672a950e862a7380 0 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3aa8a982608bf1607b6ebb22d02872e2672a950e862a7380 0 
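
Each gen_dhchap_key call above draws len/2 random bytes as a hex string (xxd -p -c0 -l $((len / 2)) /dev/urandom), wraps it into a DHHC-1 secret via the inline python step, and stores it mode 0600 under a mktemp path such as /tmp/spdk.key-null.zKl. The python body is not expanded in the trace, so the following is a hedged reconstruction of format_key based on the NVMe DH-HMAC-CHAP secret representation (base64 of the ASCII key followed by its little-endian CRC-32, with the digest column 0/1/2/3 meaning null/sha256/sha384/sha512 as in the digests table above), not a verbatim copy of nvmf/common.sh:

# hedged sketch: format_key DHHC-1 <key> <digest>
format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 -c '
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte little-endian CRC-32 trailer
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
' "$prefix" "$key" "$digest"
}

key=$(xxd -p -c0 -l 16 /dev/urandom)          # 32 hex chars, as in gen_dhchap_key null 32
format_key DHHC-1 "$key" 0                    # -> DHHC-1:00:<base64>:

This DHHC-1:<hash-id>:<base64>: shape is the same one nvme-cli's gen-dhchap-key emits, which is why these files can be handed straight to the auth paths exercised later in the test.
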
00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3aa8a982608bf1607b6ebb22d02872e2672a950e862a7380 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.j47 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.j47 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.j47 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f80d59600c73ba5ce57a49e190bb25187fd416511b192df8 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.VT5 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f80d59600c73ba5ce57a49e190bb25187fd416511b192df8 2 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f80d59600c73ba5ce57a49e190bb25187fd416511b192df8 2 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f80d59600c73ba5ce57a49e190bb25187fd416511b192df8 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:45.942 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:46.200 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.VT5 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.VT5 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.VT5 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:46.201 13:17:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d6c82ba893bad4b033c40c49a4e52464 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Nzb 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d6c82ba893bad4b033c40c49a4e52464 1 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d6c82ba893bad4b033c40c49a4e52464 1 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d6c82ba893bad4b033c40c49a4e52464 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Nzb 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Nzb 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Nzb 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3fa9abd841bab2bdf870e728c3b3c916 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JET 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3fa9abd841bab2bdf870e728c3b3c916 1 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3fa9abd841bab2bdf870e728c3b3c916 1 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=3fa9abd841bab2bdf870e728c3b3c916 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JET 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JET 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.JET 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=11099b95ece18dfa0612199dece5080da24927f7d8a70fb4 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Lgq 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 11099b95ece18dfa0612199dece5080da24927f7d8a70fb4 2 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 11099b95ece18dfa0612199dece5080da24927f7d8a70fb4 2 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=11099b95ece18dfa0612199dece5080da24927f7d8a70fb4 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Lgq 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Lgq 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Lgq 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:46.201 13:17:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=243e871855dd90c2777df814ed99f07a 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.EUw 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 243e871855dd90c2777df814ed99f07a 0 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 243e871855dd90c2777df814ed99f07a 0 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=243e871855dd90c2777df814ed99f07a 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:46.201 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.EUw 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.EUw 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.EUw 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d923f30b4624ed64d22e0fd00be59d85f6aa93b1d7724bd105881e4d6a53f6b8 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oba 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d923f30b4624ed64d22e0fd00be59d85f6aa93b1d7724bd105881e4d6a53f6b8 3 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d923f30b4624ed64d22e0fd00be59d85f6aa93b1d7724bd105881e4d6a53f6b8 3 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d923f30b4624ed64d22e0fd00be59d85f6aa93b1d7724bd105881e4d6a53f6b8 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oba 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oba 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.oba 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2975202 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2975202 ']' 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.461 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:46.462 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zKl 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.gIG ]] 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gIG 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.j47 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.VT5 ]] 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.VT5 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Nzb 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.JET ]] 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JET 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Lgq 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.EUw ]] 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.EUw 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.oba 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.721 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.722 13:17:49 
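The host/auth.sh@80-82 loop traced five times above registers every generated secret file with the running SPDK target as a named keyring entry (key0..key4, plus ckey0..ckey3; ckeys[4] is empty, so its "[[ -n '' ]]" guard skips the controller key). Condensed:

```bash
for i in "${!keys[@]}"; do
	rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
	# Controller (bidirectional) keys are optional; keyid 4 has none.
	[[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
```

rpc_cmd is the autotest wrapper around the JSON-RPC client talking to /var/tmp/spdk.sock, which is why every call above is bracketed by xtrace_disable and "[[ 0 == 0 ]]" status checks in the log.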
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
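get_main_ns_ip (nvmf/common.sh@769-783) recurs before every connect in this log. It simply maps the active transport to the name of the shell variable holding the right test IP and dereferences it; for tcp that is NVMF_INITIATOR_IP, hence the "echo 10.0.0.1". A sketch, with $TEST_TRANSPORT assumed to be the selector that xtrace shows already expanded to "tcp":

```bash
get_main_ns_ip() {
	local ip
	local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
	ip=${ip_candidates[$TEST_TRANSPORT]}       # -> NVMF_INITIATOR_IP for tcp
	[[ -n $ip && -n ${!ip} ]] && echo "${!ip}" # indirect expansion -> 10.0.0.1
}
```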
-e /sys/module/nvmet ]] 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:46.722 13:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:49.257 Waiting for block devices as requested 00:25:49.257 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:49.517 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:49.517 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:49.776 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:49.776 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:49.776 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:49.776 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:50.035 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:50.035 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:50.035 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:50.035 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:50.294 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:50.294 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:50.294 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:50.553 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:50.553 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:50.553 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:51.121 No valid GPT data, bailing 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:51.121 13:17:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:51.121 00:25:51.121 Discovery Log Number of Records 2, Generation counter 2 00:25:51.121 =====Discovery Log Entry 0====== 00:25:51.121 trtype: tcp 00:25:51.121 adrfam: ipv4 00:25:51.121 subtype: current discovery subsystem 00:25:51.121 treq: not specified, sq flow control disable supported 00:25:51.121 portid: 1 00:25:51.121 trsvcid: 4420 00:25:51.121 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:51.121 traddr: 10.0.0.1 00:25:51.121 eflags: none 00:25:51.121 sectype: none 00:25:51.121 =====Discovery Log Entry 1====== 00:25:51.121 trtype: tcp 00:25:51.121 adrfam: ipv4 00:25:51.121 subtype: nvme subsystem 00:25:51.121 treq: not specified, sq flow control disable supported 00:25:51.121 portid: 1 00:25:51.121 trsvcid: 4420 00:25:51.121 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:51.121 traddr: 10.0.0.1 00:25:51.121 eflags: none 00:25:51.121 sectype: none 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.121 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
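configure_kernel_target (nvmf/common.sh@660-705) builds the in-kernel nvmet side that the SPDK initiator will authenticate against: a subsystem backed by the freshly reset /dev/nvme0n1, exposed on a TCP port, exactly as the two discovery-log entries above confirm. The attribute paths below are assumptions based on the standard Linux nvmet configfs layout; xtrace prints only the echoed values and script line numbers:

```bash
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

mkdir "$subsys" "$subsys/namespaces/1" "$port"              # @686-688
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model" # @693
echo 1            > "$subsys/attr_allow_any_host"           # @695
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"      # @696
echo 1            > "$subsys/namespaces/1/enable"           # @697
echo 10.0.0.1     > "$port/addr_traddr"                     # @699
echo tcp          > "$port/addr_trtype"                     # @700
echo 4420         > "$port/addr_trsvcid"                    # @701
echo ipv4         > "$port/addr_adrfam"                     # @702
ln -s "$subsys" "$port/subsystems/"                         # @705
```

nvmet_auth_init (host/auth.sh@35-38) then creates the host entry for nqn.2024-02.io.spdk:host0, echoes 0 (presumably re-disabling attr_allow_any_host), and links the host into the subsystem's allowed_hosts, so only the authenticated host NQN may connect.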
-- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
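nvmet_auth_set_key (host/auth.sh@42-51), traced just above, programs the kernel target's half of the DH-HMAC-CHAP handshake for that host entry. The dhchap_* attribute names are assumptions following the Linux nvmet per-host configfs attributes; the trace shows only the echoed values ('hmac(sha256)', the dhgroup, and the two DHHC-1 secrets):

```bash
nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3 key ckey
	key=$(< "${keys[keyid]}")
	ckey=${ckeys[keyid]:+$(< "${ckeys[keyid]}")}
	echo "hmac($digest)" > "$nvmet_host/dhchap_hash"    # @48
	echo "$dhgroup"      > "$nvmet_host/dhchap_dhgroup" # @49
	echo "$key"          > "$nvmet_host/dhchap_key"     # @50
	# keyid 4 has no controller key, so bidirectional auth is skipped there
	[[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key" # @51
}
```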
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.381 nvme0n1 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.381 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: ]] 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
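The first connect above (host/auth.sh@88-94) is a smoke test: the target is keyed with keyid 1 while the initiator is allowed every digest and dhgroup at once (sha256,sha384,sha512 across ffdhe2048..ffdhe8192), and the "nvme0n1" namespace appearing proves the handshake negotiated successfully. The connect_authenticate body the trace keeps replaying boils down to:

```bash
connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3 ckey
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})     # @58
	rpc_cmd bdev_nvme_set_options \
		--dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup" # @60
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
		-n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key$keyid" "${ckey[@]}"                   # @61
	# A controller named nvme0 must exist, then tear it down for the next run.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]] # @64
	rpc_cmd bdev_nvme_detach_controller nvme0                   # @65
}
```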
00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.382 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.641 nvme0n1 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.641 13:17:54 
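From host/auth.sh@100-104 onward the log is this nested sweep unrolled, one nvmet_auth_set_key plus connect_authenticate cycle per combination (the lists come from the @94 printf calls above): keyids 0-4 under sha256/ffdhe2048 first, then ffdhe3072, and so on through every digest. Condensed:

```bash
# Note: this "digests" list (the @100 loop) holds sha256 sha384 sha512,
# distinct from the null->0 map used earlier by gen_dhchap_key.
for digest in "${digests[@]}"; do        # sha256 sha384 sha512
	for dhgroup in "${dhgroups[@]}"; do  # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
		for keyid in "${!keys[@]}"; do   # 0..4
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # @103
			connect_authenticate "$digest" "$dhgroup" "$keyid" # @104
		done
	done
done
```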
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.641 13:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.900 nvme0n1 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.900 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.158 nvme0n1 00:25:52.158 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.158 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.158 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.158 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:52.158 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: ]] 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.159 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.418 nvme0n1 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.418 nvme0n1 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.418 13:17:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.418 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: ]] 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.677 13:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.677 nvme0n1 00:25:52.677 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.677 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.677 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.677 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.677 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.677 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.936 
13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.936 nvme0n1 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.936 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.195 13:17:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.195 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.196 nvme0n1 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.196 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:25:53.454 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: ]] 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.455 13:17:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.455 nvme0n1 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.455 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:53.713 13:17:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.713 13:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.713 nvme0n1 00:25:53.713 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.713 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.713 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.713 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.713 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.713 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.972 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.972 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:53.972 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.972 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.972 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.972 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.972 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.972 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:53.972 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.972 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.972 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:53.972 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: ]] 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.973 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.231 nvme0n1 00:25:54.231 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.231 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.231 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.231 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.231 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.231 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.231 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.231 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:25:54.232 13:17:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.232 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.490 nvme0n1 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:54.490 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.491 13:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.750 nvme0n1 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: ]] 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.750 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.009 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.009 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.009 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.009 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.009 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.009 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.009 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.009 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.009 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:55.009 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.009 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.009 nvme0n1 00:25:55.009 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.009 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.009 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.009 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.009 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.269 13:17:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.269 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.528 nvme0n1 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:25:55.528 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: ]] 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.529 13:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.097 nvme0n1 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 
00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.097 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.356 nvme0n1 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.356 13:17:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.356 13:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.924 nvme0n1 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: ]] 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.924 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.183 nvme0n1 00:25:57.183 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.183 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.183 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.183 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.183 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.442 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.442 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.442 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.442 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.443 13:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.702 nvme0n1 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: ]] 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.702 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.961 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.961 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.961 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.961 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.961 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.961 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.961 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.961 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.961 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.961 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.961 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.961 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.961 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:57.961 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.961 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:58.529 nvme0n1 00:25:58.529 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.529 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.529 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.529 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.529 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.529 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.529 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.529 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.529 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.529 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.529 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.529 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.529 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.530 13:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.099 nvme0n1 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:59.099 
13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.099 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.668 nvme0n1 00:25:59.668 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.668 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.668 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.668 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.668 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.668 13:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: ]] 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.668 
13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.668 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.928 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.928 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.928 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.928 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.928 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.928 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.928 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.928 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.928 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.928 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.928 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.928 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.928 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:59.928 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.928 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.496 nvme0n1 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.496 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.497 13:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.065 nvme0n1 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: ]] 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.065 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.325 nvme0n1 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.325 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.585 nvme0n1 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:01.585 13:18:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.585 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.845 nvme0n1 00:26:01.845 13:18:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.845 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.845 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.845 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.845 13:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: ]] 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.845 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.845 nvme0n1 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.105 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.106 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.106 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.106 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.106 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.106 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.106 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.106 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.106 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:02.106 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.106 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.106 nvme0n1 00:26:02.106 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.106 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.106 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.106 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.106 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.106 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: ]] 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.365 nvme0n1 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.365 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.624 
13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.624 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.625 13:18:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.625 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.625 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.625 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.625 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.625 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.625 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.625 nvme0n1 00:26:02.625 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.625 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.625 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.625 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.625 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.625 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.625 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.625 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.625 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.625 13:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.884 nvme0n1 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.884 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.144 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: ]] 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.145 nvme0n1 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.145 
13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.145 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.405 nvme0n1 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.405 
13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: ]] 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.405 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.728 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:26:03.728 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.728 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.728 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.728 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.728 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.728 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.728 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.728 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.728 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.728 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.728 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.728 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:03.728 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.728 13:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.728 nvme0n1 00:26:03.728 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.728 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.728 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.728 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.728 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.728 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.728 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.728 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.728 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.728 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.063 13:18:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.063 nvme0n1 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:04.063 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.064 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.322 nvme0n1 00:26:04.322 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.322 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.322 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.322 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.322 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.322 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.581 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.581 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.581 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.581 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.581 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.581 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.581 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:26:04.581 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.581 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.581 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.581 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:04.581 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:04.581 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:04.581 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.581 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.581 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: ]] 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.582 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.841 nvme0n1 00:26:04.841 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.841 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.841 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.841 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.841 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.841 13:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.841 13:18:08 
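
Each iteration first provisions the kernel target side via nvmet_auth_set_key (host/auth.sh@42-51 above). The echo commands at @48-51 are visible in the trace, but xtrace does not show their redirections, so the configfs destinations in this sketch are an assumption based on the Linux nvmet host attributes; the keys/ckeys arrays are defined earlier in auth.sh, outside this slice:

    # Hedged reconstruction of nvmet_auth_set_key. Assumed: the echoes land in
    # the nvmet configfs host entry for nqn.2024-02.io.spdk:host0 (dhchap_hash,
    # dhchap_dhgroup, dhchap_key, dhchap_ctrl_key). The echoed values match the
    # trace, e.g. 'hmac(sha384)' and ffdhe4096.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path

        echo "hmac($digest)" > "$hostdir/dhchap_hash"
        echo "$dhgroup" > "$hostdir/dhchap_dhgroup"
        echo "$key" > "$hostdir/dhchap_key"
        # @51: a controller (bidirectional) key is only set when one exists.
        [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrl_key"
    }
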
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:04.841 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.842 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.842 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.842 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.842 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.842 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.842 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.842 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.842 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.842 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.842 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.842 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.842 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.842 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.842 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:04.842 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.842 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.101 nvme0n1 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: ]] 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.101 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.102 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.102 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.102 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.102 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.102 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.102 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.102 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.102 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.102 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.102 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.102 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.102 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.102 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.102 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.102 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.670 nvme0n1 00:26:05.670 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.670 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.671 13:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.947 nvme0n1 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.947 13:18:09 
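
The secrets echoed throughout are DH-HMAC-CHAP secrets in the DHHC-1 representation (the format produced by nvme-cli's gen-dhchap-key). The two-digit field after DHHC-1 names the hash associated with the secret; by that format the base64 payload is the raw key with a CRC-32 appended — an assumption from the tool's format, not something the log itself shows. A small illustrative helper, not part of auth.sh:

    # Hypothetical helper: report the hash indicator of a DHHC-1 secret.
    dhchap_secret_hash() {
        local field
        IFS=: read -r _ field _ <<< "$1"
        case $field in
            00) echo "none (key used as-is)" ;;
            01) echo "SHA-256 (32-byte key)" ;;
            02) echo "SHA-384 (48-byte key)" ;;
            03) echo "SHA-512 (64-byte key)" ;;
            *)  echo "unknown" ;;
        esac
    }
    dhchap_secret_hash "DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0:"
    # -> none (key used as-is)   (this is key 0 from the trace)
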
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:05.947 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.948 13:18:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.948 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.519 nvme0n1 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: ]] 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:06.520 13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.520 
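
The other half of each iteration is connect_authenticate (host/auth.sh@55-65), whose traced shape is: pin the initiator to one digest/dhgroup pair, attach with the keyid's key(s), check the controller name, then detach. A reconstruction from the traced lines; key0..key4 and ckey0..ckey4 are key names presumably registered with the SPDK keyring earlier in the test, which is not visible in this slice:

    # Reconstructed from host/auth.sh@55-65 as traced above.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})  # @58

        # @60: restrict negotiation to exactly one digest and one DH group.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # @61: attach, authenticating with keyN (and ckeyN when present).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # @64: authentication succeeded iff the controller actually came up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0  # @65
    }
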
13:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.779 nvme0n1 00:26:06.779 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.779 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.779 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.779 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.779 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.779 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.040 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.299 nvme0n1 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.299 13:18:10 
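
The ckey=(...) assignment at @58 is worth unpacking, since it is what lets keyid 4 (which has no controller key; note the "[[ -z '' ]]" checks above) reuse the same attach call. ${parameter:+word} expands to the two extra arguments only when the array slot is set and non-empty; a standalone demo:

    # ${parameter:+word}: conditionally contribute arguments to a command line.
    ckeys=([1]="DHHC-1:02:example..." [4]="")  # keyid 4 has an empty ckey, as in this run
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid: ${#ckey[@]} extra arg(s) ${ckey[*]}"
    done
    # keyid=1: 2 extra arg(s) --dhchap-ctrlr-key ckey1
    # keyid=4: 0 extra arg(s)
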
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: ]] 00:26:07.299 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.300 13:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.237 nvme0n1 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.238 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.806 nvme0n1 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:08.806 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.807 
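
A note on reading the recurring "[[ nvme0 == \n\v\m\e\0 ]]" lines: bash's xtrace backslash-escapes every character of the right-hand side of == to show that it was quoted, i.e. the comparison is a literal string match rather than a glob. Minimal reproduction:

    set -x
    name=nvme0
    [[ $name == "nvme0" ]] && echo "controller present"
    # the [[ ]] line appears in the trace as: + [[ nvme0 == \n\v\m\e\0 ]]
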
13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.807 13:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.375 nvme0n1 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.375 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: ]] 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.376 13:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.945 nvme0n1 00:26:09.945 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.945 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.946 13:18:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.946 13:18:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.946 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.514 nvme0n1 00:26:10.514 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.514 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.514 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.514 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.514 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: ]] 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.773 13:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:10.773 nvme0n1 00:26:10.773 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.773 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.773 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.773 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.773 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.773 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.033 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.034 nvme0n1 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:11.034 
13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:11.034 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.293 nvme0n1 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.293 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: ]] 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.294 
13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.294 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.554 nvme0n1 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.554 13:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.814 nvme0n1 00:26:11.814 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: ]] 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.815 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.074 nvme0n1 00:26:12.074 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.074 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.074 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.075 
13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.075 13:18:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.075 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.334 nvme0n1 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:12.334 13:18:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:26:12.334 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.335 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.593 nvme0n1 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: ]] 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:12.593 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.593 13:18:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.594 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.594 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.594 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.594 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.594 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.594 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.594 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.594 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.594 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.594 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.594 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.594 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.594 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.594 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.594 13:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.852 nvme0n1 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.852 
13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.852 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
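
The ffdhe3072 rounds end here and the trace moves on to ffdhe4096; the whole section is one nested sweep. A minimal sketch of the driver loop, reconstructed from the host/auth.sh@101-104 markers in the trace; the keys/ckeys arrays are filled earlier in the script, and the exact dhgroups list is inferred from the groups exercised in this log:

    # Sweep every (DH group, key index) pair: provision the key on the
    # kernel nvmet target, then prove a host can authenticate with it.
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # target side
            connect_authenticate sha512 "$dhgroup" "$keyid"  # host side
        done
    done
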
00:26:13.111 nvme0n1 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.111 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: ]] 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.112 13:18:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.112 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.371 nvme0n1 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.371 13:18:16 
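
The round that just completed, like every round in this trace, follows the same host-side shape (host/auth.sh@60-61): first restrict the SPDK initiator to the one digest and DH group under test, then attach with the named keys. A minimal sketch of that pair of calls; rpc_cmd is the suite's wrapper around scripts/rpc.py, and the key0/ckey0 names are assumed to have been registered with SPDK's keyring earlier in the script:

    # Pin the initiator to exactly one digest/DH-group combination, so a
    # successful attach proves that combination (not a fallback) was used.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # Attach with in-band authentication; --dhchap-ctrlr-key enables
    # bidirectional authentication when a controller key exists.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
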
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.371 13:18:16 
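
Worth a note on the ckey=( ... ) line traced at host/auth.sh@58 above: bash's ${var:+word} expands to nothing when var is empty, so for key indexes without a controller key the array stays empty and the extra flag simply vanishes from the attach command instead of passing an empty argument. A standalone illustration of the same expansion:

    # keyid 1 has a bidirectional secret, keyid 4 does not (ckey='').
    ckeys=([1]=DHHC-1:02:example [4]=)
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no controller-key flag>}"
    done
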
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.371 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.630 nvme0n1 00:26:13.630 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.630 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.630 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.630 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.630 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.630 13:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.630 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.890 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.149 nvme0n1 00:26:14.149 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.149 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.149 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.149 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.149 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.149 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.149 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.149 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: ]] 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.150 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.409 nvme0n1 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.409 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.410 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.668 nvme0n1 00:26:14.668 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.668 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.668 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.668 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.668 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.668 13:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.668 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.668 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.668 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.668 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.668 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.668 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: ]] 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.669 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.669 13:18:18 
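
get_main_ns_ip, traced next (nvmf/common.sh@769-783), picks the address the host should dial by mapping the transport to the name of the environment variable that holds it, then expanding that name indirectly; in this run tcp resolves to NVMF_INITIATOR_IP and finally 10.0.0.1. A reconstructed sketch of its shape; the TEST_TRANSPORT variable name is an assumption:

    get_main_ns_ip() {
        local ip
        # transport -> name of the env var holding the main namespace IP
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        [[ -z $TEST_TRANSPORT ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1  # ${!ip}: value of the variable named by $ip
        echo "${!ip}"                # -> 10.0.0.1 for tcp in this run
    }
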
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.928 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.928 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.928 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.928 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.928 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.928 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.928 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.928 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.928 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.928 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.928 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:14.928 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.928 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.188 nvme0n1 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.188 13:18:18 
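
All the secrets in this test use the NVMe in-band authentication representation, DHHC-1:<t>:<base64>:, where <t> names an optional transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the raw secret followed by a 4-byte CRC-32. That layout is from the NVMe DH-HMAC-CHAP specification, not from this log, so treat the sketch as illustrative. A quick structural check on one of the keys above:

    # Split a DHHC-1 secret and report the raw secret length (32/48/64).
    k='DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0:'
    payload=${k#DHHC-1:*:}   # drop the prefix and transformation id
    payload=${payload%:}     # drop the trailing colon
    bytes=$(printf '%s' "$payload" | base64 -d | wc -c)
    echo "secret: $((bytes - 4)) bytes (+ 4-byte CRC-32)"   # -> 32 bytes
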
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.188 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.757 nvme0n1 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:15.757 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.758 13:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.017 nvme0n1 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: ]] 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:16.017 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:16.018 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.018 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:16.018 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.018 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.275 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.275 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.275 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.275 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.275 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.275 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.275 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.275 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.275 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.275 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.276 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.276 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.276 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:16.276 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.276 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.534 nvme0n1 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:16.534 13:18:19 
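
Every attach in this section is followed by the same proof and teardown (host/auth.sh@64-65 in the trace): list the bdev_nvme controllers, require that exactly the expected name came back, then detach so the next key starts clean. The [[ nvme0 == \n\v\m\e\0 ]] form is just the right-hand side spelled as an escaped literal so bash does not treat it as a glob pattern. A condensed sketch:

    # Authentication succeeded iff the controller actually materialized.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]] || exit 1
    rpc_cmd bdev_nvme_detach_controller nvme0   # clean slate for the next keyid
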
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:16.534 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.535 13:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.104 nvme0n1 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwNDlmOTQwODBiYzRhZDZmMDNkMGYwYjBhNzI3MjQmK4m0: 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: ]] 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDkwYjhjOGI0ZmVjZjgyMDYwMWQ4YjQ1MzFkY2FhZjczYmQ1ZGNjNGEwZDIzMzJmMGUwYTE2MzJlZjMxYmE2NmkChc4=: 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.104 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.673 nvme0n1 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.673 13:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.241 nvme0n1 00:26:18.241 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.241 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.241 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.241 13:18:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.241 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.241 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.241 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.241 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.241 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.242 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.501 13:18:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.501 13:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.070 nvme0n1 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTEwOTliOTVlY2UxOGRmYTA2MTIxOTlkZWNlNTA4MGRhMjQ5MjdmN2Q4YTcwZmI0vVOT1g==: 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: ]] 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjQzZTg3MTg1NWRkOTBjMjc3N2RmODE0ZWQ5OWYwN2FxlxDI: 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:19.070 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:19.071 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.071 
13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.640 nvme0n1 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDkyM2YzMGI0NjI0ZWQ2NGQyMmUwZmQwMGJlNTlkODVmNmFhOTNiMWQ3NzI0YmQxMDU4ODFlNGQ2YTUzZjZiOK0nRjI=: 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.640 13:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.208 nvme0n1 00:26:20.208 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.208 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.208 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.208 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.208 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.208 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.468 request: 00:26:20.468 { 00:26:20.468 "name": "nvme0", 00:26:20.468 "trtype": "tcp", 00:26:20.468 "traddr": "10.0.0.1", 00:26:20.468 "adrfam": "ipv4", 00:26:20.468 "trsvcid": "4420", 00:26:20.468 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:20.468 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:20.468 "prchk_reftag": false, 00:26:20.468 "prchk_guard": false, 00:26:20.468 "hdgst": false, 00:26:20.468 "ddgst": false, 00:26:20.468 "allow_unrecognized_csi": false, 00:26:20.468 "method": "bdev_nvme_attach_controller", 00:26:20.468 "req_id": 1 00:26:20.468 } 00:26:20.468 Got JSON-RPC error response 00:26:20.468 response: 00:26:20.468 { 00:26:20.468 "code": -5, 00:26:20.468 "message": "Input/output error" 00:26:20.468 } 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
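The failed bdev_nvme_attach_controller above is the expected negative case: the kernel nvmet target was configured to require DH-HMAC-CHAP, so an attach that presents no --dhchap-key is rejected during authentication and surfaces to the host as JSON-RPC error -5 (Input/output error), and the follow-up bdev_nvme_get_controllers | jq length check confirms no controller object was left behind. A minimal sketch of the same check run by hand, assuming SPDK's stock scripts/rpc.py client; the listener address, hostnqn, subnqn, digest, and dhgroup are taken verbatim from the trace above:

    # Sketch only: replay the no-key negative test against a
    # DHCHAP-protected kernel nvmet listener (10.0.0.1:4420, as above).
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    if ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0; then
        echo "unexpected: attach without a DHCHAP key succeeded" >&2
        exit 1
    fi
    # The rejected attach must not leave a controller behind.
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]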
00:26:20.468 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.469 request: 00:26:20.469 { 00:26:20.469 "name": "nvme0", 00:26:20.469 "trtype": "tcp", 00:26:20.469 "traddr": "10.0.0.1", 00:26:20.469 "adrfam": "ipv4", 00:26:20.469 "trsvcid": "4420", 00:26:20.469 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:20.469 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:20.469 "prchk_reftag": false, 00:26:20.469 "prchk_guard": false, 00:26:20.469 "hdgst": false, 00:26:20.469 "ddgst": false, 00:26:20.469 "dhchap_key": "key2", 00:26:20.469 "allow_unrecognized_csi": false, 00:26:20.469 "method": "bdev_nvme_attach_controller", 00:26:20.469 "req_id": 1 00:26:20.469 } 00:26:20.469 Got JSON-RPC error response 00:26:20.469 response: 00:26:20.469 { 00:26:20.469 "code": -5, 00:26:20.469 "message": "Input/output error" 00:26:20.469 } 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.469 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
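The second rejection, with --dhchap-key key2, is the mismatched-secret case: the target side still holds key1 from the preceding nvmet_auth_set_key call, so DH-HMAC-CHAP negotiation fails and the RPC again returns -5 rather than leaving a half-attached controller. For reference, a sketch of the writes nvmet_auth_set_key performs on the target; the configfs host directory path and attribute names are assumptions based on the usual kernel nvmet layout, while the digest, dhgroup, and DHHC-1 secrets are the ones visible in the trace:

    # Sketch only: target-side key setup via kernel nvmet configfs
    # (path and attribute names assumed; secrets copied from the trace).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048 > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==:' > "$host/dhchap_key"
    # Bidirectional auth also sets the key the controller presents back:
    echo 'DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==:' > "$host/dhchap_ctrl_key"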
00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.729 request: 00:26:20.729 { 00:26:20.729 "name": "nvme0", 00:26:20.729 "trtype": "tcp", 00:26:20.729 "traddr": "10.0.0.1", 00:26:20.729 "adrfam": "ipv4", 00:26:20.729 "trsvcid": "4420", 00:26:20.729 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:20.729 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:20.729 "prchk_reftag": false, 00:26:20.729 "prchk_guard": false, 00:26:20.729 "hdgst": false, 00:26:20.729 "ddgst": false, 00:26:20.729 "dhchap_key": "key1", 00:26:20.729 "dhchap_ctrlr_key": "ckey2", 00:26:20.729 "allow_unrecognized_csi": false, 00:26:20.729 "method": "bdev_nvme_attach_controller", 00:26:20.729 "req_id": 1 00:26:20.729 } 00:26:20.729 Got JSON-RPC error response 00:26:20.729 response: 00:26:20.729 { 00:26:20.729 "code": -5, 00:26:20.729 "message": "Input/output 
error" 00:26:20.729 } 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.729 13:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.729 nvme0n1 00:26:20.729 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.729 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:20.729 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.729 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.729 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.729 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.729 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:20.729 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:20.729 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.729 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.729 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:20.729 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:26:20.729 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:20.729 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.729 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.729 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.989 request: 00:26:20.989 { 00:26:20.989 "name": "nvme0", 00:26:20.989 "dhchap_key": "key1", 00:26:20.989 "dhchap_ctrlr_key": "ckey2", 00:26:20.989 "method": "bdev_nvme_set_keys", 00:26:20.989 "req_id": 1 00:26:20.989 } 00:26:20.989 Got JSON-RPC error response 00:26:20.989 response: 00:26:20.989 { 00:26:20.989 "code": -13, 00:26:20.989 "message": "Permission denied" 00:26:20.989 } 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:20.989 13:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:21.927 13:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.927 13:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:21.927 13:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.927 13:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.187 13:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.187 13:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:22.187 13:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2FhOGE5ODI2MDhiZjE2MDdiNmViYjIyZDAyODcyZTI2NzJhOTUwZTg2MmE3MzgwxwJ+cg==: 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: ]] 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjgwZDU5NjAwYzczYmE1Y2U1N2E0OWUxOTBiYjI1MTg3ZmQ0MTY1MTFiMTkyZGY4dnToPA==: 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.125 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.384 nvme0n1 00:26:23.384 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.384 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:23.384 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.384 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:23.384 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.384 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.384 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:23.384 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:23.384 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:23.384 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDZjODJiYTg5M2JhZDRiMDMzYzQwYzQ5YTRlNTI0NjSZQ+HT: 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: ]] 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2ZhOWFiZDg0MWJhYjJiZGY4NzBlNzI4YzNiM2M5MTZAXBac: 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.385 request: 00:26:23.385 { 00:26:23.385 "name": "nvme0", 00:26:23.385 "dhchap_key": "key2", 00:26:23.385 "dhchap_ctrlr_key": "ckey1", 00:26:23.385 "method": "bdev_nvme_set_keys", 00:26:23.385 "req_id": 1 00:26:23.385 } 00:26:23.385 Got JSON-RPC error response 00:26:23.385 response: 00:26:23.385 { 00:26:23.385 "code": -13, 00:26:23.385 "message": "Permission denied" 00:26:23.385 } 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:23.385 13:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:24.323 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.323 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:24.323 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.323 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.323 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:24.582 13:18:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:24.582 rmmod nvme_tcp 00:26:24.582 rmmod nvme_fabrics 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2975202 ']' 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2975202 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2975202 ']' 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2975202 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2975202 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2975202' 00:26:24.582 killing process with pid 2975202 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2975202 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2975202 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:24.582 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:24.841 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:24.841 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:24.841 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.841 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:26:24.841 13:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.746 13:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:26.746 13:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:26.746 13:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:26.746 13:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:26.746 13:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:26.746 13:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:26.746 13:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:26.746 13:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:26.746 13:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:26.746 13:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:26.746 13:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:26.746 13:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:26.746 13:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:30.039 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:30.039 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:30.039 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:30.039 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:30.039 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:30.039 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:30.039 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:30.039 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:30.039 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:30.039 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:30.039 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:30.039 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:30.039 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:30.039 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:30.039 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:30.039 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:30.609 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:30.609 13:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.zKl /tmp/spdk.key-null.j47 /tmp/spdk.key-sha256.Nzb /tmp/spdk.key-sha384.Lgq /tmp/spdk.key-sha512.oba /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:30.609 13:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:33.905 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:33.905 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:33.905 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:26:33.905 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:33.905 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:33.905 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:33.905 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:33.905 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:33.906 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:33.906 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:33.906 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:33.906 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:33.906 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:33.906 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:33.906 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:33.906 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:33.906 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:33.906 00:26:33.906 real 0m54.813s 00:26:33.906 user 0m49.597s 00:26:33.906 sys 0m12.734s 00:26:33.906 13:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.906 13:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.906 ************************************ 00:26:33.906 END TEST nvmf_auth_host 00:26:33.906 ************************************ 00:26:33.906 13:18:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:33.906 13:18:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:33.906 13:18:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:33.906 13:18:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.906 13:18:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.906 ************************************ 00:26:33.906 START TEST nvmf_digest 00:26:33.906 ************************************ 00:26:33.906 13:18:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:33.906 * Looking for test storage... 
00:26:33.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:33.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.906 --rc genhtml_branch_coverage=1 00:26:33.906 --rc genhtml_function_coverage=1 00:26:33.906 --rc genhtml_legend=1 00:26:33.906 --rc geninfo_all_blocks=1 00:26:33.906 --rc geninfo_unexecuted_blocks=1 00:26:33.906 00:26:33.906 ' 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:33.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.906 --rc genhtml_branch_coverage=1 00:26:33.906 --rc genhtml_function_coverage=1 00:26:33.906 --rc genhtml_legend=1 00:26:33.906 --rc geninfo_all_blocks=1 00:26:33.906 --rc geninfo_unexecuted_blocks=1 00:26:33.906 00:26:33.906 ' 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:33.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.906 --rc genhtml_branch_coverage=1 00:26:33.906 --rc genhtml_function_coverage=1 00:26:33.906 --rc genhtml_legend=1 00:26:33.906 --rc geninfo_all_blocks=1 00:26:33.906 --rc geninfo_unexecuted_blocks=1 00:26:33.906 00:26:33.906 ' 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:33.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.906 --rc genhtml_branch_coverage=1 00:26:33.906 --rc genhtml_function_coverage=1 00:26:33.906 --rc genhtml_legend=1 00:26:33.906 --rc geninfo_all_blocks=1 00:26:33.906 --rc geninfo_unexecuted_blocks=1 00:26:33.906 00:26:33.906 ' 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.906 
13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.906 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:33.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:33.907 13:18:37 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:33.907 13:18:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.486 
13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:40.486 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:40.486 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:40.486 Found net devices under 0000:86:00.0: cvl_0_0 
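The records above show nvmf/common.sh matching both E810 ports (device ID 0x159b) against its NIC table and then resolving each PCI function to its kernel netdev through sysfs. A standalone sketch of that lookup, reusing the PCI address from this log as an example:

pci=0000:86:00.0                                   # first E810 port found above
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev, e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs prefix, keep interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"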
00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:40.486 Found net devices under 0000:86:00.1: cvl_0_1 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:40.486 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:40.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:26:40.487 00:26:40.487 --- 10.0.0.2 ping statistics --- 00:26:40.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.487 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:40.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:26:40.487 00:26:40.487 --- 10.0.0.1 ping statistics --- 00:26:40.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.487 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.487 13:18:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:40.487 ************************************ 00:26:40.487 START TEST nvmf_digest_clean 00:26:40.487 ************************************ 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2989716 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2989716 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2989716 ']' 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.487 [2024-11-19 13:18:43.122994] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:26:40.487 [2024-11-19 13:18:43.123038] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.487 [2024-11-19 13:18:43.203380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.487 [2024-11-19 13:18:43.241747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.487 [2024-11-19 13:18:43.241783] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.487 [2024-11-19 13:18:43.241791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.487 [2024-11-19 13:18:43.241798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.487 [2024-11-19 13:18:43.241803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
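The startup notices above are nvmfappstart at work: digest.sh launches nvmf_tgt inside the target network namespace with --wait-for-rpc and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that sequence, assuming the paths shown in this log (the real waitforlisten loop in autotest_common.sh also checks that the pid is still alive):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# poll until the RPC server inside the app is reachable
until $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done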
00:26:40.487 [2024-11-19 13:18:43.242383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.487 null0 00:26:40.487 [2024-11-19 13:18:43.402147] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.487 [2024-11-19 13:18:43.426353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2989736 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2989736 /var/tmp/bperf.sock 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2989736 ']' 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:40.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:40.487 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:40.488 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.488 [2024-11-19 13:18:43.478507] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:26:40.488 [2024-11-19 13:18:43.478550] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2989736 ] 00:26:40.488 [2024-11-19 13:18:43.551921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.488 [2024-11-19 13:18:43.592600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.488 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.488 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:40.488 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:40.488 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:40.488 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:40.747 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.747 13:18:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:41.007 nvme0n1 00:26:41.007 13:18:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:41.007 13:18:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:41.007 Running I/O for 2 seconds... 
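While the first 2-second randread pass runs, the run_bperf choreography the records above just walked through is easier to see collapsed into one place. Every command below is taken verbatim from this log; only the ordering comments are added, and $spdk stands in for the workspace path:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# 1. start bdevperf paused (-z) on its own RPC socket, held at --wait-for-rpc
$spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# 2. finish framework init; with no DSA configured, crc32c lands on the software accel module
$spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
# 3. attach the target with data digest on; --ddgst makes every payload carry a CRC32C
$spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# 4. run the timed workload against the resulting nvme0n1 bdev
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests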
00:26:43.327 24779.00 IOPS, 96.79 MiB/s [2024-11-19T12:18:46.704Z] 24888.50 IOPS, 97.22 MiB/s 00:26:43.327 Latency(us) 00:26:43.327 [2024-11-19T12:18:46.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:43.327 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:43.327 nvme0n1 : 2.04 24405.51 95.33 0.00 0.00 5137.33 2692.67 44906.41 00:26:43.327 [2024-11-19T12:18:46.704Z] =================================================================================================================== 00:26:43.327 [2024-11-19T12:18:46.704Z] Total : 24405.51 95.33 0.00 0.00 5137.33 2692.67 44906.41 00:26:43.327 { 00:26:43.327 "results": [ 00:26:43.327 { 00:26:43.327 "job": "nvme0n1", 00:26:43.327 "core_mask": "0x2", 00:26:43.327 "workload": "randread", 00:26:43.327 "status": "finished", 00:26:43.327 "queue_depth": 128, 00:26:43.327 "io_size": 4096, 00:26:43.327 "runtime": 2.044825, 00:26:43.327 "iops": 24405.511474087023, 00:26:43.327 "mibps": 95.33402919565243, 00:26:43.327 "io_failed": 0, 00:26:43.327 "io_timeout": 0, 00:26:43.327 "avg_latency_us": 5137.331620304665, 00:26:43.327 "min_latency_us": 2692.6747826086958, 00:26:43.327 "max_latency_us": 44906.406956521736 00:26:43.327 } 00:26:43.327 ], 00:26:43.327 "core_count": 1 00:26:43.327 } 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:43.327 | select(.opcode=="crc32c") 00:26:43.327 | "\(.module_name) \(.executed)"' 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2989736 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2989736 ']' 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2989736 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2989736 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2989736' 00:26:43.327 killing process with pid 2989736 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2989736 00:26:43.327 Received shutdown signal, test time was about 2.000000 seconds 00:26:43.327 00:26:43.327 Latency(us) 00:26:43.327 [2024-11-19T12:18:46.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:43.327 [2024-11-19T12:18:46.704Z] =================================================================================================================== 00:26:43.327 [2024-11-19T12:18:46.704Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:43.327 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2989736 00:26:43.587 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:43.587 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:43.587 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:43.587 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:43.587 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:43.587 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:43.587 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:43.587 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2990218 00:26:43.587 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2990218 /var/tmp/bperf.sock 00:26:43.587 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:43.587 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2990218 ']' 00:26:43.587 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:43.587 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.587 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:43.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:43.587 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:43.587 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:43.588 [2024-11-19 13:18:46.822518] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:26:43.588 [2024-11-19 13:18:46.822566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2990218 ] 00:26:43.588 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:43.588 Zero copy mechanism will not be used. 00:26:43.588 [2024-11-19 13:18:46.897097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.588 [2024-11-19 13:18:46.940096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.848 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:43.848 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:43.848 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:43.848 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:43.848 13:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:44.108 13:18:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.108 13:18:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.367 nvme0n1 00:26:44.367 13:18:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:44.367 13:18:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:44.367 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:44.367 Zero copy mechanism will not be used. 00:26:44.367 Running I/O for 2 seconds... 
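When this pass completes, the harness repeats the check already shown after the first run: it queries the bdevperf instance's accel statistics and confirms the CRC32C digests were computed by the software module, since scan_dsa=false means no DSA offload was requested. Condensed from the surrounding records:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# a passing run prints "software <nonzero count>", matching digest.sh's exp_module=software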
00:26:46.688 5766.00 IOPS, 720.75 MiB/s [2024-11-19T12:18:50.065Z] 5824.00 IOPS, 728.00 MiB/s 00:26:46.688 Latency(us) 00:26:46.688 [2024-11-19T12:18:50.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.688 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:46.688 nvme0n1 : 2.00 5822.77 727.85 0.00 0.00 2744.99 630.43 11055.64 00:26:46.688 [2024-11-19T12:18:50.065Z] =================================================================================================================== 00:26:46.688 [2024-11-19T12:18:50.065Z] Total : 5822.77 727.85 0.00 0.00 2744.99 630.43 11055.64 00:26:46.688 { 00:26:46.688 "results": [ 00:26:46.688 { 00:26:46.688 "job": "nvme0n1", 00:26:46.688 "core_mask": "0x2", 00:26:46.688 "workload": "randread", 00:26:46.688 "status": "finished", 00:26:46.688 "queue_depth": 16, 00:26:46.688 "io_size": 131072, 00:26:46.688 "runtime": 2.00317, 00:26:46.688 "iops": 5822.770908110645, 00:26:46.688 "mibps": 727.8463635138306, 00:26:46.688 "io_failed": 0, 00:26:46.688 "io_timeout": 0, 00:26:46.688 "avg_latency_us": 2744.992253831932, 00:26:46.688 "min_latency_us": 630.4278260869565, 00:26:46.688 "max_latency_us": 11055.638260869566 00:26:46.688 } 00:26:46.688 ], 00:26:46.688 "core_count": 1 00:26:46.688 } 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:46.688 | select(.opcode=="crc32c") 00:26:46.688 | "\(.module_name) \(.executed)"' 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2990218 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2990218 ']' 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2990218 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2990218 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2990218' 00:26:46.688 killing process with pid 2990218 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2990218 00:26:46.688 Received shutdown signal, test time was about 2.000000 seconds 00:26:46.688 00:26:46.688 Latency(us) 00:26:46.688 [2024-11-19T12:18:50.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.688 [2024-11-19T12:18:50.065Z] =================================================================================================================== 00:26:46.688 [2024-11-19T12:18:50.065Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:46.688 13:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2990218 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2990868 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2990868 /var/tmp/bperf.sock 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2990868 ']' 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:46.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:46.948 [2024-11-19 13:18:50.138130] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:26:46.948 [2024-11-19 13:18:50.138180] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2990868 ] 00:26:46.948 [2024-11-19 13:18:50.212670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.948 [2024-11-19 13:18:50.253854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:46.948 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:47.208 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.208 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.467 nvme0n1 00:26:47.467 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:47.467 13:18:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:47.727 Running I/O for 2 seconds... 
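Note that the pass/fail verdict after each of these runs comes from the accel framework, not from bdevperf's throughput numbers: the script pulls crc32c statistics over the bperf socket and checks which module executed them. A minimal sketch of that check, reusing the jq filter visible in the trace (acc_module and acc_executed are the script's own variable names; software is the expected module because scan_dsa=false):

read -r acc_module acc_executed < <(
    $SPDK/scripts/rpc.py -s $SOCK accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
(( acc_executed > 0 ))            # digests were actually computed
[[ $acc_module == software ]]     # and by the expected module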
00:26:49.607 26686.00 IOPS, 104.24 MiB/s [2024-11-19T12:18:52.984Z] 26783.00 IOPS, 104.62 MiB/s 00:26:49.607 Latency(us) 00:26:49.607 [2024-11-19T12:18:52.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.607 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:49.607 nvme0n1 : 2.00 26786.10 104.63 0.00 0.00 4772.08 3547.49 11910.46 00:26:49.607 [2024-11-19T12:18:52.984Z] =================================================================================================================== 00:26:49.607 [2024-11-19T12:18:52.984Z] Total : 26786.10 104.63 0.00 0.00 4772.08 3547.49 11910.46 00:26:49.607 { 00:26:49.607 "results": [ 00:26:49.607 { 00:26:49.607 "job": "nvme0n1", 00:26:49.607 "core_mask": "0x2", 00:26:49.607 "workload": "randwrite", 00:26:49.607 "status": "finished", 00:26:49.607 "queue_depth": 128, 00:26:49.607 "io_size": 4096, 00:26:49.607 "runtime": 2.004547, 00:26:49.607 "iops": 26786.10179756324, 00:26:49.607 "mibps": 104.6332101467314, 00:26:49.607 "io_failed": 0, 00:26:49.607 "io_timeout": 0, 00:26:49.607 "avg_latency_us": 4772.076115961463, 00:26:49.607 "min_latency_us": 3547.4921739130436, 00:26:49.607 "max_latency_us": 11910.455652173912 00:26:49.607 } 00:26:49.607 ], 00:26:49.607 "core_count": 1 00:26:49.607 } 00:26:49.607 13:18:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:49.607 13:18:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:49.607 13:18:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:49.607 13:18:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:49.607 | select(.opcode=="crc32c") 00:26:49.607 | "\(.module_name) \(.executed)"' 00:26:49.607 13:18:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:49.867 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:49.867 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:49.867 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:49.867 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:49.867 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2990868 00:26:49.867 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2990868 ']' 00:26:49.867 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2990868 00:26:49.867 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:49.867 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:49.867 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2990868 00:26:49.867 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:49.867 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:26:49.867 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2990868' 00:26:49.867 killing process with pid 2990868 00:26:49.867 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2990868 00:26:49.867 Received shutdown signal, test time was about 2.000000 seconds 00:26:49.867 00:26:49.867 Latency(us) 00:26:49.867 [2024-11-19T12:18:53.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.867 [2024-11-19T12:18:53.244Z] =================================================================================================================== 00:26:49.867 [2024-11-19T12:18:53.244Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:49.867 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2990868 00:26:50.128 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:50.128 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:50.128 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:50.128 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:50.128 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:50.128 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:50.128 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:50.128 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2991374 00:26:50.128 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2991374 /var/tmp/bperf.sock 00:26:50.128 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:50.128 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2991374 ']' 00:26:50.128 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:50.128 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:50.128 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:50.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:50.128 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:50.128 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:50.128 [2024-11-19 13:18:53.402374] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:26:50.128 [2024-11-19 13:18:53.402422] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2991374 ] 00:26:50.128 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:50.128 Zero copy mechanism will not be used. 00:26:50.128 [2024-11-19 13:18:53.476502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.417 [2024-11-19 13:18:53.522660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.417 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:50.417 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:50.417 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:50.417 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:50.417 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:50.688 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.688 13:18:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.965 nvme0n1 00:26:50.966 13:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:50.966 13:18:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:51.237 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:51.237 Zero copy mechanism will not be used. 00:26:51.237 Running I/O for 2 seconds... 
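The MiB/s column in these result tables is plain arithmetic over the JSON beside them, mibps = iops * io_size / 2^20, which makes the tables easy to sanity-check. Two spot checks against the runs that already completed above (values copied from their result blocks):

awk 'BEGIN { printf "%.2f\n", 5822.77  * 131072 / (1024 * 1024) }'   # -> 727.85 (randread, 128 KiB)
awk 'BEGIN { printf "%.2f\n", 26786.10 * 4096   / (1024 * 1024) }'   # -> 104.63 (randwrite, 4 KiB)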
00:26:53.115 6700.00 IOPS, 837.50 MiB/s [2024-11-19T12:18:56.492Z] 6249.00 IOPS, 781.12 MiB/s 00:26:53.115 Latency(us) 00:26:53.115 [2024-11-19T12:18:56.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.115 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:53.115 nvme0n1 : 2.00 6248.81 781.10 0.00 0.00 2556.44 1275.10 4929.45 00:26:53.115 [2024-11-19T12:18:56.492Z] =================================================================================================================== 00:26:53.115 [2024-11-19T12:18:56.492Z] Total : 6248.81 781.10 0.00 0.00 2556.44 1275.10 4929.45 00:26:53.115 { 00:26:53.115 "results": [ 00:26:53.115 { 00:26:53.115 "job": "nvme0n1", 00:26:53.115 "core_mask": "0x2", 00:26:53.115 "workload": "randwrite", 00:26:53.115 "status": "finished", 00:26:53.115 "queue_depth": 16, 00:26:53.115 "io_size": 131072, 00:26:53.115 "runtime": 2.00262, 00:26:53.115 "iops": 6248.814053589797, 00:26:53.115 "mibps": 781.1017566987247, 00:26:53.115 "io_failed": 0, 00:26:53.115 "io_timeout": 0, 00:26:53.115 "avg_latency_us": 2556.4409647629436, 00:26:53.115 "min_latency_us": 1275.1026086956522, 00:26:53.115 "max_latency_us": 4929.446956521739 00:26:53.115 } 00:26:53.115 ], 00:26:53.115 "core_count": 1 00:26:53.115 } 00:26:53.115 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:53.115 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:53.115 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:53.115 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:53.115 | select(.opcode=="crc32c") 00:26:53.115 | "\(.module_name) \(.executed)"' 00:26:53.115 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:53.374 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:53.374 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:53.374 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:53.374 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:53.374 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2991374 00:26:53.374 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2991374 ']' 00:26:53.374 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2991374 00:26:53.374 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:53.374 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:53.374 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2991374 00:26:53.374 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:53.374 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:26:53.374 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2991374' 00:26:53.374 killing process with pid 2991374 00:26:53.374 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2991374 00:26:53.375 Received shutdown signal, test time was about 2.000000 seconds 00:26:53.375 00:26:53.375 Latency(us) 00:26:53.375 [2024-11-19T12:18:56.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.375 [2024-11-19T12:18:56.752Z] =================================================================================================================== 00:26:53.375 [2024-11-19T12:18:56.752Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:53.375 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2991374 00:26:53.634 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2989716 00:26:53.634 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2989716 ']' 00:26:53.634 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2989716 00:26:53.634 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:53.634 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:53.634 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2989716 00:26:53.634 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:53.634 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:53.634 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2989716' 00:26:53.634 killing process with pid 2989716 00:26:53.634 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2989716 00:26:53.634 13:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2989716 00:26:53.893 00:26:53.893 real 0m13.955s 00:26:53.893 user 0m26.752s 00:26:53.893 sys 0m4.572s 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:53.893 ************************************ 00:26:53.893 END TEST nvmf_digest_clean 00:26:53.893 ************************************ 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:53.893 ************************************ 00:26:53.893 START TEST nvmf_digest_error 00:26:53.893 ************************************ 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2991996 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2991996 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2991996 ']' 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:53.893 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:53.893 [2024-11-19 13:18:57.147661] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:26:53.894 [2024-11-19 13:18:57.147712] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.894 [2024-11-19 13:18:57.227913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.894 [2024-11-19 13:18:57.267302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.894 [2024-11-19 13:18:57.267338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.894 [2024-11-19 13:18:57.267346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.894 [2024-11-19 13:18:57.267353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.894 [2024-11-19 13:18:57.267359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
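Condensing the bring-up traced above: nvmfappstart launches the target inside the test's network namespace and deliberately parks it at --wait-for-rpc so the crc32c opcode can be rerouted before initialization finishes. A sketch using the exact flags from the log (waitforlisten is the autotest helper that polls the target's RPC socket, /var/tmp/spdk.sock here):

ip netns exec cvl_0_0_ns_spdk \
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
waitforlisten $nvmfpid    # returns once /var/tmp/spdk.sock accepts RPCs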
00:26:53.894 [2024-11-19 13:18:57.267968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:54.153 [2024-11-19 13:18:57.352454] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:54.153 null0 00:26:54.153 [2024-11-19 13:18:57.446415] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.153 [2024-11-19 13:18:57.470598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2992121 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2992121 /var/tmp/bperf.sock 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2992121 ']' 
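The crc32c rerouting itself is verbatim in the trace; common_target_config, by contrast, runs as a single rpc_cmd batch whose body is not expanded here, and only its effects are visible (bdev null0, the TCP transport init, the listener on 10.0.0.2:4420). The batch below is therefore a plausible reconstruction, not the script's literal text, and the null bdev size/block arguments are placeholders:

rpc_cmd accel_assign_opc -o crc32c -m error    # verbatim: route crc32c to the error module

rpc_cmd <<- CONFIG
    framework_start_init
    bdev_null_create null0 100 4096
    nvmf_create_transport -t tcp
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
CONFIG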
00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:54.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.153 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:54.153 [2024-11-19 13:18:57.526389] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:26:54.154 [2024-11-19 13:18:57.526431] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2992121 ] 00:26:54.413 [2024-11-19 13:18:57.601238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.413 [2024-11-19 13:18:57.643818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.413 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.413 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:54.413 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:54.413 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:54.672 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:54.672 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.672 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:54.672 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.672 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:54.672 13:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:54.931 nvme0n1 00:26:55.191 13:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:55.191 13:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.191 13:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
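One ordering detail in the setup that just completed is worth spelling out: crc32c corruption is disabled while the host attaches (presumably so the connect path completes cleanly) and armed again only just before the timed run, while --bdev-retry-count -1 on the host side lets every injected failure be retried indefinitely. A sketch of that handshake, where rpc_cmd addresses the target's socket and bperf_rpc/bperf_py wrap rpc.py and bdevperf.py with -s /var/tmp/bperf.sock, as in the script:

bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_cmd accel_error_inject_error -o crc32c -t disable          # keep the attach quiet
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # arm 256 corruptions
bperf_py perform_tests    # the digest errors that follow below are expected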
00:26:55.191 13:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.191 13:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:55.191 13:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:55.191 Running I/O for 2 seconds... 00:26:55.191 [2024-11-19 13:18:58.416572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.191 [2024-11-19 13:18:58.416608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.191 [2024-11-19 13:18:58.416619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.191 [2024-11-19 13:18:58.425722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.191 [2024-11-19 13:18:58.425749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.191 [2024-11-19 13:18:58.425759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.191 [2024-11-19 13:18:58.434942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.191 [2024-11-19 13:18:58.434976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.191 [2024-11-19 13:18:58.434984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.191 [2024-11-19 13:18:58.444858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.191 [2024-11-19 13:18:58.444882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.191 [2024-11-19 13:18:58.444892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.191 [2024-11-19 13:18:58.453759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.191 [2024-11-19 13:18:58.453782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.191 [2024-11-19 13:18:58.453791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.191 [2024-11-19 13:18:58.464273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.191 [2024-11-19 13:18:58.464296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.191 [2024-11-19 13:18:58.464304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.191 [2024-11-19 13:18:58.473857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.191 [2024-11-19 13:18:58.473879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.191 [2024-11-19 13:18:58.473892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.191 [2024-11-19 13:18:58.483802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.191 [2024-11-19 13:18:58.483824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.191 [2024-11-19 13:18:58.483832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.191 [2024-11-19 13:18:58.492757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.191 [2024-11-19 13:18:58.492778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.191 [2024-11-19 13:18:58.492787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.191 [2024-11-19 13:18:58.502771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.191 [2024-11-19 13:18:58.502792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.191 [2024-11-19 13:18:58.502802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.191 [2024-11-19 13:18:58.513173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.191 [2024-11-19 13:18:58.513196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.191 [2024-11-19 13:18:58.513205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.191 [2024-11-19 13:18:58.523974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.191 [2024-11-19 13:18:58.523995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.191 [2024-11-19 13:18:58.524003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.191 [2024-11-19 13:18:58.533323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.191 [2024-11-19 13:18:58.533343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.191 [2024-11-19 13:18:58.533351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.191 [2024-11-19 13:18:58.542076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.192 [2024-11-19 13:18:58.542098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.192 [2024-11-19 13:18:58.542106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.192 [2024-11-19 13:18:58.551496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.192 [2024-11-19 13:18:58.551518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.192 [2024-11-19 13:18:58.551526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.192 [2024-11-19 13:18:58.561392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.192 [2024-11-19 13:18:58.561415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.192 [2024-11-19 13:18:58.561424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.451 [2024-11-19 13:18:58.571856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.451 [2024-11-19 13:18:58.571878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.451 [2024-11-19 13:18:58.571886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.451 [2024-11-19 13:18:58.581420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.451 [2024-11-19 13:18:58.581441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.451 [2024-11-19 13:18:58.581449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.451 [2024-11-19 13:18:58.589898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.451 [2024-11-19 13:18:58.589920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.451 [2024-11-19 13:18:58.589929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.451 [2024-11-19 13:18:58.600041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.451 [2024-11-19 13:18:58.600063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.451 [2024-11-19 13:18:58.600072] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.451 [2024-11-19 13:18:58.611907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.451 [2024-11-19 13:18:58.611929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.451 [2024-11-19 13:18:58.611938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.451 [2024-11-19 13:18:58.620863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.451 [2024-11-19 13:18:58.620884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.451 [2024-11-19 13:18:58.620892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.451 [2024-11-19 13:18:58.632681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.451 [2024-11-19 13:18:58.632702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.451 [2024-11-19 13:18:58.632711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.451 [2024-11-19 13:18:58.643670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.451 [2024-11-19 13:18:58.643692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.451 [2024-11-19 13:18:58.643704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.452 [2024-11-19 13:18:58.652508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.452 [2024-11-19 13:18:58.652531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.452 [2024-11-19 13:18:58.652540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.452 [2024-11-19 13:18:58.664444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.452 [2024-11-19 13:18:58.664466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.452 [2024-11-19 13:18:58.664474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.452 [2024-11-19 13:18:58.675551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.452 [2024-11-19 13:18:58.675573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18418 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:55.452 [2024-11-19 13:18:58.675581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.452 [2024-11-19 13:18:58.688542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.452 [2024-11-19 13:18:58.688565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.452 [2024-11-19 13:18:58.688574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.452 [2024-11-19 13:18:58.696770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.452 [2024-11-19 13:18:58.696791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.452 [2024-11-19 13:18:58.696800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.452 [2024-11-19 13:18:58.708781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.452 [2024-11-19 13:18:58.708803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.452 [2024-11-19 13:18:58.708811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.452 [2024-11-19 13:18:58.720003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.452 [2024-11-19 13:18:58.720025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.452 [2024-11-19 13:18:58.720033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.452 [2024-11-19 13:18:58.728529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.452 [2024-11-19 13:18:58.728550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.452 [2024-11-19 13:18:58.728558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.452 [2024-11-19 13:18:58.739190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.452 [2024-11-19 13:18:58.739215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.452 [2024-11-19 13:18:58.739224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:55.452 [2024-11-19 13:18:58.750857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:55.452 [2024-11-19 13:18:58.750879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 
nsid:1 lba:15678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.452 [2024-11-19 13:18:58.750887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[similar injected data digest error entries elided: nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370), each followed by a READ (qid:1, len:1, varying cid/lba) completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22), 2024-11-19 13:18:58.763 through 13:18:59.390]
00:26:56.233 24706.00 IOPS, 96.51 MiB/s [2024-11-19T12:18:59.610Z]
[data digest error entries continue in the same pattern on tqpair=(0x1913370), 2024-11-19 13:18:59.402 through 13:19:00.224]
00:26:57.016 [2024-11-19 13:19:00.233495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:57.016 [2024-11-19 13:19:00.233517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.016 [2024-11-19 13:19:00.233526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0
m:0 dnr:0 00:26:57.016 [2024-11-19 13:19:00.247043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:57.016 [2024-11-19 13:19:00.247064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.016 [2024-11-19 13:19:00.247076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.016 [2024-11-19 13:19:00.258940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:57.016 [2024-11-19 13:19:00.258967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.016 [2024-11-19 13:19:00.258976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.016 [2024-11-19 13:19:00.267574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:57.016 [2024-11-19 13:19:00.267594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.016 [2024-11-19 13:19:00.267606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.016 [2024-11-19 13:19:00.280279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:57.016 [2024-11-19 13:19:00.280300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.016 [2024-11-19 13:19:00.280308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.016 [2024-11-19 13:19:00.291129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:57.016 [2024-11-19 13:19:00.291150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.016 [2024-11-19 13:19:00.291158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.016 [2024-11-19 13:19:00.303289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:57.016 [2024-11-19 13:19:00.303310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.016 [2024-11-19 13:19:00.303318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.016 [2024-11-19 13:19:00.313978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:57.016 [2024-11-19 13:19:00.313998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.016 [2024-11-19 13:19:00.314007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.016 [2024-11-19 13:19:00.327551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:57.016 [2024-11-19 13:19:00.327571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.016 [2024-11-19 13:19:00.327580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.016 [2024-11-19 13:19:00.339536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:57.016 [2024-11-19 13:19:00.339557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.016 [2024-11-19 13:19:00.339565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.016 [2024-11-19 13:19:00.348228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:57.016 [2024-11-19 13:19:00.348249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.016 [2024-11-19 13:19:00.348258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.016 [2024-11-19 13:19:00.361045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:57.016 [2024-11-19 13:19:00.361067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.016 [2024-11-19 13:19:00.361075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.016 [2024-11-19 13:19:00.374413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:57.016 [2024-11-19 13:19:00.374433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.016 [2024-11-19 13:19:00.374442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.016 [2024-11-19 13:19:00.382991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:57.016 [2024-11-19 13:19:00.383010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.016 [2024-11-19 13:19:00.383019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.275 [2024-11-19 13:19:00.395420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370) 00:26:57.275 [2024-11-19 13:19:00.395441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.275 [2024-11-19 13:19:00.395449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:57.275 24573.50 IOPS, 95.99 MiB/s [2024-11-19T12:19:00.652Z]
[2024-11-19 13:19:00.408214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1913370)
00:26:57.275 [2024-11-19 13:19:00.408236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.275 [2024-11-19 13:19:00.408244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:57.275
00:26:57.275                                            Latency(us)
00:26:57.275 [2024-11-19T12:19:00.652Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:26:57.275 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:57.275   nvme0n1 : 2.01  24569.43  95.97  0.00  0.00  5203.91  2649.93  18919.96
00:26:57.275 [2024-11-19T12:19:00.652Z] ===================================================================================================================
00:26:57.275 [2024-11-19T12:19:00.652Z] Total : 24569.43  95.97  0.00  0.00  5203.91  2649.93  18919.96
00:26:57.275 {
00:26:57.275   "results": [
00:26:57.275     {
00:26:57.275       "job": "nvme0n1",
00:26:57.275       "core_mask": "0x2",
00:26:57.275       "workload": "randread",
00:26:57.275       "status": "finished",
00:26:57.275       "queue_depth": 128,
00:26:57.275       "io_size": 4096,
00:26:57.275       "runtime": 2.005541,
00:26:57.275       "iops": 24569.43039309593,
00:26:57.275       "mibps": 95.97433747303097,
00:26:57.275       "io_failed": 0,
00:26:57.275       "io_timeout": 0,
00:26:57.275       "avg_latency_us": 5203.906080674123,
00:26:57.275       "min_latency_us": 2649.9339130434782,
00:26:57.275       "max_latency_us": 18919.958260869564
00:26:57.275     }
00:26:57.275   ],
00:26:57.275   "core_count": 1
00:26:57.275 }
00:26:57.275 13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:57.275 | .driver_specific
00:26:57.275 | .nvme_error
00:26:57.275 | .status_code
00:26:57.275 | .command_transient_transport_error'
00:26:57.275 13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:57.275 13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 ))
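The (( 193 > 0 )) check above is digest.sh verifying that the injected digest corruption actually surfaced as transient transport errors: bdev_get_iostat reported 193 of them for nvme0n1. A minimal standalone sketch of the same query, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock, that the controller was attached with --nvme-error-stat enabled, and that it is run from the spdk checkout (the rpc.py invocation and the jq filter are taken verbatim from the trace above):

# Sketch: pull the transient-transport-error counter out of bdev_get_iostat.
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
      | .driver_specific
      | .nvme_error
      | .status_code
      | .command_transient_transport_error'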
00:26:57.275 13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2992121
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2992121 ']'
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2992121
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2992121
00:26:57.535 13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2992121'
killing process with pid 2992121
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2992121
Received shutdown signal, test time was about 2.000000 seconds
00:26:57.535
00:26:57.535                                            Latency(us)
00:26:57.535 [2024-11-19T12:19:00.912Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:26:57.535 [2024-11-19T12:19:00.912Z] ===================================================================================================================
00:26:57.535 [2024-11-19T12:19:00.912Z] Total : 0.00  0.00  0.00  0.00  0.00  0.00  0.00
00:26:57.535 13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2992121
00:26:57.535 13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2992593
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2992593 /var/tmp/bperf.sock
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2992593 ']'
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
13:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
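run_bperf_err above relaunches bdevperf for the next case: a 131072-byte (128 KiB) random-read workload at queue depth 16 for 2 seconds, pinned by core mask -m 2. The -z flag makes bdevperf start idle and wait for a perform_tests RPC on /var/tmp/bperf.sock instead of running immediately. A sketch of that launch sequence, where the polling loop is a simplified stand-in for autotest_common.sh's waitforlisten helper (paths assumed relative to the spdk checkout):

# Sketch: start bdevperf in RPC-driven mode and wait for its socket.
./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# rpc_get_methods succeeds once the app is up and listening on the socket.
until ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done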
00:26:57.535 [2024-11-19 13:19:00.891627] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:26:57.535 [2024-11-19 13:19:00.891673] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2992593 ]
00:26:57.535 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:57.535 Zero copy mechanism will not be used.
00:26:57.535 [2024-11-19 13:19:00.967404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:57.794 [2024-11-19 13:19:01.006987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:57.794 13:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:57.794 13:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:57.794 13:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:57.794 13:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:58.053 13:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:58.053 13:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:58.053 13:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:58.053 13:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:58.053 13:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:58.054 13:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:58.313 nvme0n1
13:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
13:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
13:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
13:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
13:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:58.574 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:58.574 Zero copy mechanism will not be used.
00:26:58.574 Running I/O for 2 seconds...
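The trace above is the whole error-injection setup for this case: error statistics and retries are enabled in the NVMe bdev module (--bdev-retry-count -1, so affected I/Os are retried rather than failed up the stack), the controller is attached with the TCP data digest enabled (--ddgst), and the accel_error module is told to corrupt crc32c results (its -i 32 interval argument, as traced), so the receive-path digest check fails and each affected READ completes with the COMMAND TRANSIENT TRANSPORT ERROR (00/22) entries logged below. The same RPC sequence collected into one place, as a sketch only (socket path and target address are the ones used by this run; assumed to run from the spdk checkout; the rpc shell function is a hypothetical helper, not part of digest.sh):

# Sketch: digest-error setup as driven by digest.sh above.
rpc() { ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc accel_error_inject_error -o crc32c -t disable   # clear any earlier injection
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc accel_error_inject_error -o crc32c -t corrupt -i 32
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests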
00:26:58.574 [2024-11-19 13:19:01.730449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.574 [2024-11-19 13:19:01.730486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.574 [2024-11-19 13:19:01.730497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.574 [2024-11-19 13:19:01.735918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.574 [2024-11-19 13:19:01.735946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.574 [2024-11-19 13:19:01.735964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.574 [2024-11-19 13:19:01.741239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.574 [2024-11-19 13:19:01.741263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.574 [2024-11-19 13:19:01.741272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.574 [2024-11-19 13:19:01.746678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.574 [2024-11-19 13:19:01.746701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.574 [2024-11-19 13:19:01.746710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.574 [2024-11-19 13:19:01.751912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.574 [2024-11-19 13:19:01.751934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.574 [2024-11-19 13:19:01.751942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.574 [2024-11-19 13:19:01.757168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.574 [2024-11-19 13:19:01.757190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.574 [2024-11-19 13:19:01.757198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.574 [2024-11-19 13:19:01.762385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.574 [2024-11-19 13:19:01.762407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.574 [2024-11-19 13:19:01.762415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.574 [2024-11-19 13:19:01.767505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.767526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.767535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.770357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.770379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.770387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.775572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.775594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.775602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.780830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.780857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.780865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.786025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.786047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.786055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.791233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.791256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.791264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.796379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.796400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.796408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.801552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.801574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.801583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.806754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.806776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.806785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.811928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.811955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.811964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.817139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.817161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.817170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.822278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.822301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.822310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.827466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.827488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.827497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.832708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.832729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.832738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.837913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.837934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.837945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.843179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.843202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.843211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.848349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.848370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.848378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.853552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.853574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.853582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.858683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.858704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.858713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.863860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.863882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.863890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.869015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.869037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 
[2024-11-19 13:19:01.869049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.874210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.874232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.874240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.879411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.879432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.879440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.884563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.884585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.884593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.889767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.889790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.889799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.894944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.894970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.894980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.900096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.575 [2024-11-19 13:19:01.900119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.575 [2024-11-19 13:19:01.900127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.575 [2024-11-19 13:19:01.905287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.576 [2024-11-19 13:19:01.905308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.576 [2024-11-19 13:19:01.905316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.576 [2024-11-19 13:19:01.910470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.576 [2024-11-19 13:19:01.910491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.576 [2024-11-19 13:19:01.910499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.576 [2024-11-19 13:19:01.915657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.576 [2024-11-19 13:19:01.915685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.576 [2024-11-19 13:19:01.915693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.576 [2024-11-19 13:19:01.920823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.576 [2024-11-19 13:19:01.920845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.576 [2024-11-19 13:19:01.920853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.576 [2024-11-19 13:19:01.926018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.576 [2024-11-19 13:19:01.926040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.576 [2024-11-19 13:19:01.926049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.576 [2024-11-19 13:19:01.931158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.576 [2024-11-19 13:19:01.931178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.576 [2024-11-19 13:19:01.931188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.576 [2024-11-19 13:19:01.936282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.576 [2024-11-19 13:19:01.936303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.576 [2024-11-19 13:19:01.936312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.576 [2024-11-19 13:19:01.941463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.576 [2024-11-19 13:19:01.941485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.576 [2024-11-19 13:19:01.941493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.576 [2024-11-19 13:19:01.946683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.576 [2024-11-19 13:19:01.946705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.576 [2024-11-19 13:19:01.946714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.836 [2024-11-19 13:19:01.951935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.836 [2024-11-19 13:19:01.951963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.836 [2024-11-19 13:19:01.951972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.836 [2024-11-19 13:19:01.957232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.836 [2024-11-19 13:19:01.957254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.836 [2024-11-19 13:19:01.957262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.836 [2024-11-19 13:19:01.962410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.836 [2024-11-19 13:19:01.962432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.836 [2024-11-19 13:19:01.962441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.836 [2024-11-19 13:19:01.967554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.836 [2024-11-19 13:19:01.967576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.836 [2024-11-19 13:19:01.967584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.836 [2024-11-19 13:19:01.972726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.836 [2024-11-19 13:19:01.972748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.836 [2024-11-19 13:19:01.972757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.836 [2024-11-19 13:19:01.977891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:01.977913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:01.977921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:01.983048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:01.983070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:01.983079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:01.988293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:01.988318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:01.988326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:01.993557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:01.993591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:01.993600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:01.998833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:01.998854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:01.998862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.004058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.004084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.004092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.009295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.009316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.009324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.014621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 
13:19:02.014643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.014652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.019704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.019726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.019734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.025460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.025484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.025492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.032187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.032211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.032219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.039106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.039130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.039139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.045706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.045728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.045737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.051412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.051435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.051443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.056729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.056751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.056759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.061985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.062006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.062015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.067246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.067268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.067277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.072476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.072498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.072507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.077784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.077805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.077813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.083187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.083209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.083217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.088398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.088420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.088428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.093670] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.093692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.093700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.098890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.098912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.098924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.104243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.104265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.104273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.109389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.109411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.109419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.114701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.114722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.114730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.120055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.837 [2024-11-19 13:19:02.120077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.837 [2024-11-19 13:19:02.120085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.837 [2024-11-19 13:19:02.125139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.125161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.125170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:26:58.838 [2024-11-19 13:19:02.130425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.130446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.130454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.838 [2024-11-19 13:19:02.135693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.135715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.135723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.838 [2024-11-19 13:19:02.141045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.141066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.141075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.838 [2024-11-19 13:19:02.146336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.146360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.146369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.838 [2024-11-19 13:19:02.151542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.151564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.151572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.838 [2024-11-19 13:19:02.156836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.156858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.156866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.838 [2024-11-19 13:19:02.162121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.162142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.162150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.838 [2024-11-19 13:19:02.167484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.167504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.167512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.838 [2024-11-19 13:19:02.172672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.172694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.172702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.838 [2024-11-19 13:19:02.177916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.177938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.177946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.838 [2024-11-19 13:19:02.183138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.183160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.183168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.838 [2024-11-19 13:19:02.188421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.188443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.188451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:58.838 [2024-11-19 13:19:02.193607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.193629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.193636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:58.838 [2024-11-19 13:19:02.198818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.198840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.198848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:58.838 [2024-11-19 13:19:02.204023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.204043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.204052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:58.838 [2024-11-19 13:19:02.209293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:58.838 [2024-11-19 13:19:02.209315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.838 [2024-11-19 13:19:02.209323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.214653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.214675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.214683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.219986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.220007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.220015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.225236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.225259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.225269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.230522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.230544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.230553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.235889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.235911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.235923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.241098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.241121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.241130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.246550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.246572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.246581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.251698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.251720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.251729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.256941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.256967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.256975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.262193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.262214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.262222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.267523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.267544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.267553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.272697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.272719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 
[2024-11-19 13:19:02.272727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.277979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.278001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.278009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.283224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.283248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.283256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.288494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.288516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.288524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.293802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.293824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.293832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.299191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.299212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.299221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.304340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.304362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.304370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.309628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.309650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15456 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.309658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.314838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.314859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.314867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.099 [2024-11-19 13:19:02.320060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.099 [2024-11-19 13:19:02.320082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.099 [2024-11-19 13:19:02.320090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.325466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.325487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.325495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.330882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.330904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.330912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.336379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.336400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.336408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.341682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.341703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.341711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.346897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.346918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.346926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.352286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.352308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.352316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.357679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.357700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.357708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.362923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.362944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.362960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.368260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.368281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.368289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.373492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.373513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.373525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.378699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.378720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.378728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.383929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.383958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.383967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.389198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.389220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.389228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.394546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.394567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.394576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.399857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.399878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.399886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.405135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.405156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.405165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.410397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.410419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.410427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.415792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.415812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.415820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.421082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 
[2024-11-19 13:19:02.421107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.421115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.426441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.426462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.426470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.431692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.431712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.431721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.436746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.436767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.436775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.442108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.442128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.442136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.447342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.447363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.447371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.452599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.452620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.452628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.457962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.457983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.457992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.100 [2024-11-19 13:19:02.463067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.100 [2024-11-19 13:19:02.463089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.100 [2024-11-19 13:19:02.463097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.101 [2024-11-19 13:19:02.468350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.101 [2024-11-19 13:19:02.468372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.101 [2024-11-19 13:19:02.468380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.361 [2024-11-19 13:19:02.473903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.361 [2024-11-19 13:19:02.473924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.361 [2024-11-19 13:19:02.473933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.479310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.479332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.479341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.484396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.484418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.484426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.489699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.489721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.489729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.495047] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.495070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.495078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.500318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.500340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.500348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.505754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.505775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.505784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.510997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.511017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.511029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.516228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.516249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.516256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.521590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.521612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.521620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.527105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.527126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.527135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:26:59.362 [2024-11-19 13:19:02.532238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.532260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.532268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.537534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.537556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.537564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.542742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.542762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.542770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.548028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.548051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.548059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.553287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.553308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.553317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.558447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.558469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.558477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.563740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.563762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.563770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.569184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.569205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.569213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.574378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.574400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.574408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.579781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.579802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.579810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.362 [2024-11-19 13:19:02.585079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.362 [2024-11-19 13:19:02.585100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.362 [2024-11-19 13:19:02.585109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.590286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.590307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 [2024-11-19 13:19:02.590315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.595707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.595728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 [2024-11-19 13:19:02.595736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.600986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.601007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 [2024-11-19 13:19:02.601021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.606131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.606154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 [2024-11-19 13:19:02.606163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.611337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.611359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 [2024-11-19 13:19:02.611367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.616576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.616598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 [2024-11-19 13:19:02.616607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.621758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.621780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 [2024-11-19 13:19:02.621789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.627000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.627023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 [2024-11-19 13:19:02.627031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.632138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.632160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 [2024-11-19 13:19:02.632168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.637317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.637340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 [2024-11-19 13:19:02.637349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.642674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.642696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 [2024-11-19 13:19:02.642704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.647912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.647938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 [2024-11-19 13:19:02.647953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.653171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.653194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 [2024-11-19 13:19:02.653202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.658359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.658380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 [2024-11-19 13:19:02.658388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.663601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.663623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 [2024-11-19 13:19:02.663631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.668861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.668885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 [2024-11-19 13:19:02.668894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:59.363 [2024-11-19 13:19:02.674115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:26:59.363 [2024-11-19 13:19:02.674137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.363 
[2024-11-19 13:19:02.674146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:59.363 [2024-11-19 13:19:02.679312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580)
00:26:59.363 [2024-11-19 13:19:02.679335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.363 [2024-11-19 13:19:02.679343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line group (nvme_tcp.c:1365 "data digest error on tqpair=(0x8ae580)", nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for further qid:1 READs, varying cid and lba, from 13:19:02.684 through 13:19:02.722 ...]
00:26:59.364 5846.00 IOPS, 730.75 MiB/s [2024-11-19T12:19:02.741Z]
[... the same pattern continues uninterrupted from 13:19:02.729 through 13:19:03.405 ...]
00:27:00.151 [2024-11-19 13:19:03.410725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580)
00:27:00.151 [2024-11-19 13:19:03.410746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.151 [2024-11-19 13:19:03.410754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:00.151 [2024-11-19 13:19:03.415960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580)
00:27:00.151 [2024-11-19 13:19:03.415982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.151 [2024-11-19 13:19:03.415990] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.151 [2024-11-19 13:19:03.421214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.151 [2024-11-19 13:19:03.421235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.151 [2024-11-19 13:19:03.421243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.151 [2024-11-19 13:19:03.426575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.151 [2024-11-19 13:19:03.426598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.151 [2024-11-19 13:19:03.426607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.151 [2024-11-19 13:19:03.431851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.151 [2024-11-19 13:19:03.431873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.151 [2024-11-19 13:19:03.431882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.151 [2024-11-19 13:19:03.437052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.151 [2024-11-19 13:19:03.437079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.151 [2024-11-19 13:19:03.437087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.151 [2024-11-19 13:19:03.442270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.151 [2024-11-19 13:19:03.442291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.151 [2024-11-19 13:19:03.442301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.151 [2024-11-19 13:19:03.447498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.151 [2024-11-19 13:19:03.447519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.151 [2024-11-19 13:19:03.447527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.151 [2024-11-19 13:19:03.452713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.151 [2024-11-19 13:19:03.452735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.151 
[2024-11-19 13:19:03.452743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.151 [2024-11-19 13:19:03.457965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.151 [2024-11-19 13:19:03.457987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.151 [2024-11-19 13:19:03.457995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.151 [2024-11-19 13:19:03.463183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.151 [2024-11-19 13:19:03.463205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.151 [2024-11-19 13:19:03.463213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.151 [2024-11-19 13:19:03.468415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.151 [2024-11-19 13:19:03.468437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.151 [2024-11-19 13:19:03.468445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.151 [2024-11-19 13:19:03.473684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.152 [2024-11-19 13:19:03.473706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.152 [2024-11-19 13:19:03.473714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.152 [2024-11-19 13:19:03.478899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.152 [2024-11-19 13:19:03.478921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.152 [2024-11-19 13:19:03.478933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.152 [2024-11-19 13:19:03.484139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.152 [2024-11-19 13:19:03.484161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.152 [2024-11-19 13:19:03.484169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.152 [2024-11-19 13:19:03.489351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.152 [2024-11-19 13:19:03.489373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.152 [2024-11-19 13:19:03.489381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.152 [2024-11-19 13:19:03.494613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.152 [2024-11-19 13:19:03.494635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.152 [2024-11-19 13:19:03.494643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.152 [2024-11-19 13:19:03.499884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.152 [2024-11-19 13:19:03.499906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.152 [2024-11-19 13:19:03.499914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.152 [2024-11-19 13:19:03.505296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.152 [2024-11-19 13:19:03.505318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.152 [2024-11-19 13:19:03.505327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.152 [2024-11-19 13:19:03.510636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.152 [2024-11-19 13:19:03.510660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.152 [2024-11-19 13:19:03.510668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.152 [2024-11-19 13:19:03.515891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.152 [2024-11-19 13:19:03.515913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.152 [2024-11-19 13:19:03.515921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.152 [2024-11-19 13:19:03.521190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.152 [2024-11-19 13:19:03.521212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.152 [2024-11-19 13:19:03.521221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.412 [2024-11-19 13:19:03.524731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.412 [2024-11-19 13:19:03.524757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.412 [2024-11-19 13:19:03.524766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.412 [2024-11-19 13:19:03.529054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.412 [2024-11-19 13:19:03.529077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.412 [2024-11-19 13:19:03.529086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.412 [2024-11-19 13:19:03.534036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.412 [2024-11-19 13:19:03.534058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.412 [2024-11-19 13:19:03.534068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.412 [2024-11-19 13:19:03.539238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.412 [2024-11-19 13:19:03.539261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.412 [2024-11-19 13:19:03.539269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.412 [2024-11-19 13:19:03.544375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.412 [2024-11-19 13:19:03.544397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.412 [2024-11-19 13:19:03.544405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.412 [2024-11-19 13:19:03.549667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.412 [2024-11-19 13:19:03.549689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.412 [2024-11-19 13:19:03.549698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.555002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.555024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.555032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.560389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.560411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.560420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.566075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.566097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.566107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.571857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.571879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.571888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.577705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.577727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.577735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.582920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.582942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.582956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.588146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.588168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.588177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.593337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.593358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.593366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.598627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 
[2024-11-19 13:19:03.598648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.598657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.603907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.603929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.603937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.609293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.609316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.609325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.614665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.614687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.614700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.620006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.620027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.620036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.625116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.625139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.625148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.630391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.630414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.630423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.635677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.635699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.635708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.640910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.640932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.640940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.646183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.646206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.646214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.651428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.651450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.651459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.656661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.656683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.656691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.661849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.661874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.661884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.667054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.667077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.667087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.672257] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.672279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.672287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.677466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.677488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.677497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.682672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.682693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.682704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.687926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.687953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.687962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.413 [2024-11-19 13:19:03.693204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.413 [2024-11-19 13:19:03.693225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.413 [2024-11-19 13:19:03.693233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.414 [2024-11-19 13:19:03.698469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.414 [2024-11-19 13:19:03.698491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.414 [2024-11-19 13:19:03.698499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.414 [2024-11-19 13:19:03.703728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.414 [2024-11-19 13:19:03.703750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.414 [2024-11-19 13:19:03.703759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:27:00.414 [2024-11-19 13:19:03.708991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.414 [2024-11-19 13:19:03.709013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.414 [2024-11-19 13:19:03.709022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.414 [2024-11-19 13:19:03.714217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.414 [2024-11-19 13:19:03.714239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.414 [2024-11-19 13:19:03.714247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:00.414 [2024-11-19 13:19:03.719465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.414 [2024-11-19 13:19:03.719487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.414 [2024-11-19 13:19:03.719495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:00.414 [2024-11-19 13:19:03.724721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.414 [2024-11-19 13:19:03.724743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.414 [2024-11-19 13:19:03.724752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:00.414 [2024-11-19 13:19:03.730030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ae580) 00:27:00.414 [2024-11-19 13:19:03.730053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.414 [2024-11-19 13:19:03.730061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:00.414 5847.50 IOPS, 730.94 MiB/s 00:27:00.414 Latency(us) 00:27:00.414 [2024-11-19T12:19:03.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.414 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:00.414 nvme0n1 : 2.00 5847.12 730.89 0.00 0.00 2733.90 651.80 8263.23 00:27:00.414 [2024-11-19T12:19:03.791Z] =================================================================================================================== 00:27:00.414 [2024-11-19T12:19:03.791Z] Total : 5847.12 730.89 0.00 0.00 2733.90 651.80 8263.23 00:27:00.414 { 00:27:00.414 "results": [ 00:27:00.414 { 00:27:00.414 "job": "nvme0n1", 00:27:00.414 "core_mask": "0x2", 00:27:00.414 "workload": "randread", 00:27:00.414 "status": "finished", 00:27:00.414 "queue_depth": 16, 00:27:00.414 "io_size": 131072, 00:27:00.414 "runtime": 2.00321, 00:27:00.414 "iops": 5847.115379815396, 00:27:00.414 "mibps": 730.8894224769246, 00:27:00.414 "io_failed": 
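A quick consistency check on the summary above: this job issues 131072-byte (0.125 MiB) reads, so 5847.12 IOPS x 0.125 MiB = 730.89 MiB/s, exactly the throughput reported. Note that io_failed stays 0 in the JSON below even though every read hit a digest error; with --bdev-retry-count -1 the I/Os are retried, and the failures surface only in the transient-transport-error counter that the trace checks afterwards (378 for this run).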
00:27:00.414 {
00:27:00.414   "results": [
00:27:00.414     {
00:27:00.414       "job": "nvme0n1",
00:27:00.414       "core_mask": "0x2",
00:27:00.414       "workload": "randread",
00:27:00.414       "status": "finished",
00:27:00.414       "queue_depth": 16,
00:27:00.414       "io_size": 131072,
00:27:00.414       "runtime": 2.00321,
00:27:00.414       "iops": 5847.115379815396,
00:27:00.414       "mibps": 730.8894224769246,
00:27:00.414       "io_failed": 0,
00:27:00.414       "io_timeout": 0,
00:27:00.414       "avg_latency_us": 2733.898423676406,
00:27:00.414       "min_latency_us": 651.7982608695652,
00:27:00.414       "max_latency_us": 8263.234782608695
00:27:00.414     }
00:27:00.414   ],
00:27:00.414   "core_count": 1
00:27:00.414 }
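What the harness does next is read these statistics back and gate the test on the transient-error counter. A minimal standalone sketch of that step, using the paths and socket name from this trace (the condensed jq path is equivalent to the multi-line filter the script expands):

  # count digest-induced transient transport errors for a bdev over the bperf RPC socket
  get_transient_errcount() {
      local bdev=$1
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }
  # the run passes only if at least one injected error was counted
  (( $(get_transient_errcount nvme0n1) > 0 ))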
00:27:00.414 13:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:00.414 13:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:00.414 13:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:00.414 | .driver_specific
00:27:00.414 | .nvme_error
00:27:00.414 | .status_code
00:27:00.414 | .command_transient_transport_error'
00:27:00.414 13:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:00.673 13:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 378 > 0 ))
00:27:00.673 13:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2992593
00:27:00.673 13:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2992593 ']'
00:27:00.673 13:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2992593
00:27:00.673 13:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:00.673 13:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:00.673 13:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2992593
00:27:00.674 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:00.674 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:00.674 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2992593'
00:27:00.674 killing process with pid 2992593
00:27:00.674 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2992593
00:27:00.674 Received shutdown signal, test time was about 2.000000 seconds
00:27:00.674
00:27:00.674 Latency(us)
00:27:00.674 [2024-11-19T12:19:04.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:00.674 [2024-11-19T12:19:04.051Z] ===================================================================================================================
00:27:00.674 [2024-11-19T12:19:04.051Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:00.674 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2992593
00:27:00.674 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:00.674 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:00.674 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:00.674 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:00.933 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:00.933 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:27:00.933 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2993138
00:27:00.933 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2993138 /var/tmp/bperf.sock
00:27:00.933 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2993138 ']'
00:27:00.933 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:00.933 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:00.933 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:00.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:00.933 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:00.933 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:01.192 [2024-11-19 13:19:04.197469] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:27:01.192 [2024-11-19 13:19:04.197517] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2993138 ]
00:27:01.192 [2024-11-19 13:19:04.255139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:01.192 [2024-11-19 13:19:04.299403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:01.192 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:01.192 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
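bdevperf is deliberately started in wait-for-RPC mode (-z) on its own socket so the NVMe-oF bdev can be wired up over RPC before any I/O begins. A rough standalone equivalent of the launch-and-wait sequence above, assuming the same paths; the polling loop is a simplified stand-in for the waitforlisten helper, with rpc_get_methods used only as a liveness probe:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # poll until the bdevperf RPC socket answers
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done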
00:27:01.192 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:01.192 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:01.453 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:01.453 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.453 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:01.453 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.453 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:01.453 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:01.712 nvme0n1
00:27:01.712 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:01.712 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.713 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:01.713 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.713 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:01.713 13:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:01.713 Running I/O for 2 seconds...
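Condensed, the setup that produces the write-side error burst below is a handful of RPCs: enable per-controller NVMe error counters with unlimited bdev retries, attach the target with the TCP data digest enabled (--ddgst), then re-arm the accel crc32c error injector once the controller is up. The trace shows the split between sockets: bdev_nvme_* calls go to bdevperf's socket, while rpc_cmd drives the injector in the target-side application (default socket). A sketch under those assumptions, with all RPC arguments copied verbatim from the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc accel_error_inject_error -o crc32c -t disable        # target side: leave digests intact while connecting
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256  # target side: corrupt crc32c results during the run
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests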
00:27:01.713 [2024-11-19 13:19:04.996429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166dfdc0
00:27:01.713 [2024-11-19 13:19:04.997397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:01.713 [2024-11-19 13:19:04.997425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
[log condensed: the same data_crc32_calc_done/WRITE/completion triplet repeats for every injected crc32c corruption between 13:19:05.004910 and 13:19:05.283793 on tqpair=(0x1ceb640), each with a different pdu=0x2000166... value, cid, lba, and sqhd, and every completion reported as COMMAND TRANSIENT TRANSPORT ERROR (00/22) p:0 m:0 dnr:0]
00:27:01.974 [2024-11-19 13:19:05.291975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e84c0
00:27:01.974 [2024-11-19 13:19:05.292927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1
lba:801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.974 [2024-11-19 13:19:05.292945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:01.974 [2024-11-19 13:19:05.301162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e27f0 00:27:01.974 [2024-11-19 13:19:05.302120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.974 [2024-11-19 13:19:05.302141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:01.974 [2024-11-19 13:19:05.310396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e1710 00:27:01.974 [2024-11-19 13:19:05.311350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.974 [2024-11-19 13:19:05.311368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:01.974 [2024-11-19 13:19:05.319569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f20d8 00:27:01.975 [2024-11-19 13:19:05.320526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.975 [2024-11-19 13:19:05.320545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:01.975 [2024-11-19 13:19:05.328753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f35f0 00:27:01.975 [2024-11-19 13:19:05.329762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.975 [2024-11-19 13:19:05.329782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:01.975 [2024-11-19 13:19:05.337980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fc560 00:27:01.975 [2024-11-19 13:19:05.338932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:01.975 [2024-11-19 13:19:05.338962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.348425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f9b30 00:27:02.235 [2024-11-19 13:19:05.349864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.349882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.358188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e99d8 00:27:02.235 [2024-11-19 13:19:05.359711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:121 nsid:1 lba:10314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.359729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.364651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ef6a8 00:27:02.235 [2024-11-19 13:19:05.365363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.365382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.374613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fa7d8 00:27:02.235 [2024-11-19 13:19:05.375915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.375933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.382487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f7100 00:27:02.235 [2024-11-19 13:19:05.383182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.383201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.392725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166feb58 00:27:02.235 [2024-11-19 13:19:05.393569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.393588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.401895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ee190 00:27:02.235 [2024-11-19 13:19:05.402748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.402768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.411065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ef270 00:27:02.235 [2024-11-19 13:19:05.411904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.411923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.420479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f20d8 00:27:02.235 [2024-11-19 13:19:05.421095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.421114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.430104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f6cc8 00:27:02.235 [2024-11-19 13:19:05.430853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.430873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.438792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ec840 00:27:02.235 [2024-11-19 13:19:05.440070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.440090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.446671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f2d80 00:27:02.235 [2024-11-19 13:19:05.447288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.447308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.456046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166eee38 00:27:02.235 [2024-11-19 13:19:05.456662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.456681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.467044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f2510 00:27:02.235 [2024-11-19 13:19:05.468017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.468036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.476422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ea680 00:27:02.235 [2024-11-19 13:19:05.477415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.477435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.485630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ea680 00:27:02.235 [2024-11-19 
13:19:05.486649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.486669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.496277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fd640 00:27:02.235 [2024-11-19 13:19:05.497843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.497863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.502942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f31b8 00:27:02.235 [2024-11-19 13:19:05.503562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.235 [2024-11-19 13:19:05.503582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:02.235 [2024-11-19 13:19:05.512505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fc998 00:27:02.236 [2024-11-19 13:19:05.513105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.236 [2024-11-19 13:19:05.513126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:02.236 [2024-11-19 13:19:05.521865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166efae0 00:27:02.236 [2024-11-19 13:19:05.522456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.236 [2024-11-19 13:19:05.522476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:02.236 [2024-11-19 13:19:05.530978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166efae0 00:27:02.236 [2024-11-19 13:19:05.531564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.236 [2024-11-19 13:19:05.531584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:02.236 [2024-11-19 13:19:05.540682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f3e60 00:27:02.236 [2024-11-19 13:19:05.541612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.236 [2024-11-19 13:19:05.541630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:02.236 [2024-11-19 13:19:05.551920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e27f0 
00:27:02.236 [2024-11-19 13:19:05.553240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.236 [2024-11-19 13:19:05.553260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:02.236 [2024-11-19 13:19:05.558476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ebfd0 00:27:02.236 [2024-11-19 13:19:05.559094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.236 [2024-11-19 13:19:05.559113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:02.236 [2024-11-19 13:19:05.567627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ef270 00:27:02.236 [2024-11-19 13:19:05.568306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.236 [2024-11-19 13:19:05.568326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:02.236 [2024-11-19 13:19:05.578855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e2c28 00:27:02.236 [2024-11-19 13:19:05.579933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.236 [2024-11-19 13:19:05.579960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:02.236 [2024-11-19 13:19:05.588181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f9b30 00:27:02.236 [2024-11-19 13:19:05.589152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.236 [2024-11-19 13:19:05.589172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:02.236 [2024-11-19 13:19:05.597860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f3a28 00:27:02.236 [2024-11-19 13:19:05.599202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.236 [2024-11-19 13:19:05.599222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.236 [2024-11-19 13:19:05.606529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ef6a8 00:27:02.236 [2024-11-19 13:19:05.607565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.236 [2024-11-19 13:19:05.607585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.496 [2024-11-19 13:19:05.615875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) 
with pdu=0x2000166f4b08 00:27:02.496 [2024-11-19 13:19:05.616877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.496 [2024-11-19 13:19:05.616897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:02.496 [2024-11-19 13:19:05.626994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e0630 00:27:02.496 [2024-11-19 13:19:05.628491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.496 [2024-11-19 13:19:05.628511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:02.496 [2024-11-19 13:19:05.634998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e49b0 00:27:02.496 [2024-11-19 13:19:05.636007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.496 [2024-11-19 13:19:05.636027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:02.496 [2024-11-19 13:19:05.643735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166dfdc0 00:27:02.496 [2024-11-19 13:19:05.644723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.496 [2024-11-19 13:19:05.644742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:02.496 [2024-11-19 13:19:05.652710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ff3c8 00:27:02.496 [2024-11-19 13:19:05.653501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.496 [2024-11-19 13:19:05.653521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:02.496 [2024-11-19 13:19:05.662099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fd208 00:27:02.496 [2024-11-19 13:19:05.662886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.496 [2024-11-19 13:19:05.662905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:02.496 [2024-11-19 13:19:05.671793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fc560 00:27:02.496 [2024-11-19 13:19:05.672446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.496 [2024-11-19 13:19:05.672466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:02.496 [2024-11-19 13:19:05.681240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ceb640) with pdu=0x2000166fac10 00:27:02.496 [2024-11-19 13:19:05.682152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.496 [2024-11-19 13:19:05.682172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:02.496 [2024-11-19 13:19:05.690660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ec408 00:27:02.496 [2024-11-19 13:19:05.691555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.496 [2024-11-19 13:19:05.691574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:02.496 [2024-11-19 13:19:05.700720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166de470 00:27:02.496 [2024-11-19 13:19:05.702030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.496 [2024-11-19 13:19:05.702050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:02.496 [2024-11-19 13:19:05.710186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ef6a8 00:27:02.496 [2024-11-19 13:19:05.711085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.496 [2024-11-19 13:19:05.711105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:02.496 [2024-11-19 13:19:05.718663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ec408 00:27:02.496 [2024-11-19 13:19:05.719644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.496 [2024-11-19 13:19:05.719663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:02.496 [2024-11-19 13:19:05.728048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fda78 00:27:02.496 [2024-11-19 13:19:05.728562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.496 [2024-11-19 13:19:05.728582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:02.496 [2024-11-19 13:19:05.737567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e9168 00:27:02.496 [2024-11-19 13:19:05.738325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.497 [2024-11-19 13:19:05.738345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:02.497 [2024-11-19 13:19:05.745966] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f0350 00:27:02.497 [2024-11-19 13:19:05.746789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.497 [2024-11-19 13:19:05.746808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:02.497 [2024-11-19 13:19:05.755303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f8e88 00:27:02.497 [2024-11-19 13:19:05.756113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.497 [2024-11-19 13:19:05.756135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:02.497 [2024-11-19 13:19:05.767223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f2948 00:27:02.497 [2024-11-19 13:19:05.768747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.497 [2024-11-19 13:19:05.768767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:02.497 [2024-11-19 13:19:05.773694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e84c0 00:27:02.497 [2024-11-19 13:19:05.774336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.497 [2024-11-19 13:19:05.774355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.497 [2024-11-19 13:19:05.784216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e3060 00:27:02.497 [2024-11-19 13:19:05.784969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.497 [2024-11-19 13:19:05.784989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.497 [2024-11-19 13:19:05.792796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e1710 00:27:02.497 [2024-11-19 13:19:05.794084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.497 [2024-11-19 13:19:05.794103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:02.497 [2024-11-19 13:19:05.800672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fac10 00:27:02.497 [2024-11-19 13:19:05.801291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.497 [2024-11-19 13:19:05.801311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:02.497 [2024-11-19 13:19:05.811601] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fac10 00:27:02.497 [2024-11-19 13:19:05.812703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.497 [2024-11-19 13:19:05.812724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:02.497 [2024-11-19 13:19:05.819541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fd208 00:27:02.497 [2024-11-19 13:19:05.820169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.497 [2024-11-19 13:19:05.820192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:02.497 [2024-11-19 13:19:05.829014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fb480 00:27:02.497 [2024-11-19 13:19:05.829734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.497 [2024-11-19 13:19:05.829754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:02.497 [2024-11-19 13:19:05.838925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f31b8 00:27:02.497 [2024-11-19 13:19:05.839903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.497 [2024-11-19 13:19:05.839923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:02.497 [2024-11-19 13:19:05.848255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e95a0 00:27:02.497 [2024-11-19 13:19:05.849221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.497 [2024-11-19 13:19:05.849240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:02.497 [2024-11-19 13:19:05.858215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fef90 00:27:02.497 [2024-11-19 13:19:05.859293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.497 [2024-11-19 13:19:05.859312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:02.497 [2024-11-19 13:19:05.866826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e6738 00:27:02.497 [2024-11-19 13:19:05.867792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.497 [2024-11-19 13:19:05.867812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:02.758 
[2024-11-19 13:19:05.875999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f4b08 00:27:02.758 [2024-11-19 13:19:05.876964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:05.876984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:05.885356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e1b48 00:27:02.758 [2024-11-19 13:19:05.886315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:05.886334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:05.894577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ee190 00:27:02.758 [2024-11-19 13:19:05.895517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:05.895536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:05.902978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f20d8 00:27:02.758 [2024-11-19 13:19:05.903921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:05.903940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:05.913883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f20d8 00:27:02.758 [2024-11-19 13:19:05.915285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:05.915305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:05.921817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ef6a8 00:27:02.758 [2024-11-19 13:19:05.922744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:05.922764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:05.931276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e27f0 00:27:02.758 [2024-11-19 13:19:05.932338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:05.932357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 
dnr:0 00:27:02.758 [2024-11-19 13:19:05.939049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e38d0 00:27:02.758 [2024-11-19 13:19:05.939483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:05.939502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:05.948377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166eee38 00:27:02.758 [2024-11-19 13:19:05.949066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:05.949085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:05.959902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ff3c8 00:27:02.758 [2024-11-19 13:19:05.961410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:05.961429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:05.966384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ebfd0 00:27:02.758 [2024-11-19 13:19:05.966975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:05.966994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:05.976554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f1868 00:27:02.758 [2024-11-19 13:19:05.977488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:05.977507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:05.985266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e8088 00:27:02.758 [2024-11-19 13:19:05.986099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:05.986117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:02.758 27597.00 IOPS, 107.80 MiB/s [2024-11-19T12:19:06.135Z] [2024-11-19 13:19:05.994213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166efae0 00:27:02.758 [2024-11-19 13:19:05.995045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:05.995064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:06.005597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ed920 00:27:02.758 [2024-11-19 13:19:06.007001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:06.007020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:06.014903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fb480 00:27:02.758 [2024-11-19 13:19:06.016306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:06.016324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:06.022831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ec840 00:27:02.758 [2024-11-19 13:19:06.023534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:06.023553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:06.031236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ed0b0 00:27:02.758 [2024-11-19 13:19:06.032159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:06.032178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:06.040520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166df550 00:27:02.758 [2024-11-19 13:19:06.041313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:06.041332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:02.758 [2024-11-19 13:19:06.051697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fc998 00:27:02.758 [2024-11-19 13:19:06.052935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.758 [2024-11-19 13:19:06.052966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:02.759 [2024-11-19 13:19:06.059626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f3a28 00:27:02.759 [2024-11-19 13:19:06.060440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.759 [2024-11-19 13:19:06.060463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:02.759 [2024-11-19 13:19:06.069261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e8d30 00:27:02.759 [2024-11-19 13:19:06.070317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.759 [2024-11-19 13:19:06.070336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:02.759 [2024-11-19 13:19:06.077197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e9e10 00:27:02.759 [2024-11-19 13:19:06.077772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.759 [2024-11-19 13:19:06.077791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:02.759 [2024-11-19 13:19:06.086648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fda78 00:27:02.759 [2024-11-19 13:19:06.087334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.759 [2024-11-19 13:19:06.087355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:02.759 [2024-11-19 13:19:06.096360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e0630 00:27:02.759 [2024-11-19 13:19:06.097308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.759 [2024-11-19 13:19:06.097329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:02.759 [2024-11-19 13:19:06.105977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fbcf0 00:27:02.759 [2024-11-19 13:19:06.107031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.759 [2024-11-19 13:19:06.107050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:02.759 [2024-11-19 13:19:06.115002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f8618 00:27:02.759 [2024-11-19 13:19:06.116034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.759 [2024-11-19 13:19:06.116053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:02.759 [2024-11-19 13:19:06.122926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f35f0 00:27:02.759 [2024-11-19 13:19:06.123490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.759 [2024-11-19 
13:19:06.123510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:03.019 [2024-11-19 13:19:06.132443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f7da8 00:27:03.019 [2024-11-19 13:19:06.133152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.019 [2024-11-19 13:19:06.133173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:03.019 [2024-11-19 13:19:06.143015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e3d08 00:27:03.019 [2024-11-19 13:19:06.144152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.019 [2024-11-19 13:19:06.144171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:03.019 [2024-11-19 13:19:06.152308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e38d0 00:27:03.019 [2024-11-19 13:19:06.153450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.019 [2024-11-19 13:19:06.153469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:03.019 [2024-11-19 13:19:06.159770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e0630 00:27:03.019 [2024-11-19 13:19:06.160450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.019 [2024-11-19 13:19:06.160469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:03.019 [2024-11-19 13:19:06.170035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fd208 00:27:03.019 [2024-11-19 13:19:06.170687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.019 [2024-11-19 13:19:06.170706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:03.019 [2024-11-19 13:19:06.178686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f57b0 00:27:03.019 [2024-11-19 13:19:06.179322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.019 [2024-11-19 13:19:06.179342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:03.019 [2024-11-19 13:19:06.189152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fa3a0 00:27:03.019 [2024-11-19 13:19:06.190301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:03.019 [2024-11-19 13:19:06.190320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:03.019 [2024-11-19 13:19:06.197997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fda78 00:27:03.019 [2024-11-19 13:19:06.199165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.019 [2024-11-19 13:19:06.199184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:03.019 [2024-11-19 13:19:06.207301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e7818 00:27:03.019 [2024-11-19 13:19:06.208442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.019 [2024-11-19 13:19:06.208462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:03.019 [2024-11-19 13:19:06.214886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f1430 00:27:03.019 [2024-11-19 13:19:06.215668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.019 [2024-11-19 13:19:06.215689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:03.019 [2024-11-19 13:19:06.225481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fb480 00:27:03.019 [2024-11-19 13:19:06.226522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.019 [2024-11-19 13:19:06.226541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:03.019 [2024-11-19 13:19:06.234057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f0ff8 00:27:03.019 [2024-11-19 13:19:06.235632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.019 [2024-11-19 13:19:06.235651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:03.019 [2024-11-19 13:19:06.243755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166dfdc0 00:27:03.019 [2024-11-19 13:19:06.244688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.019 [2024-11-19 13:19:06.244708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:03.019 [2024-11-19 13:19:06.253372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e73e0 00:27:03.019 [2024-11-19 13:19:06.254429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1976 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:03.019 [2024-11-19 13:19:06.254448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:03.019 [2024-11-19 13:19:06.263170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e49b0 00:27:03.019 [2024-11-19 13:19:06.264462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.020 [2024-11-19 13:19:06.264482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:03.020 [2024-11-19 13:19:06.270669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e0630 00:27:03.020 [2024-11-19 13:19:06.271520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.020 [2024-11-19 13:19:06.271538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:03.020 [2024-11-19 13:19:06.280659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e6300 00:27:03.020 [2024-11-19 13:19:06.281343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.020 [2024-11-19 13:19:06.281363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:03.020 [2024-11-19 13:19:06.289324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e12d8 00:27:03.020 [2024-11-19 13:19:06.289910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.020 [2024-11-19 13:19:06.289929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:03.020 [2024-11-19 13:19:06.298041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f5378 00:27:03.020 [2024-11-19 13:19:06.298564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.020 [2024-11-19 13:19:06.298586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:03.020 [2024-11-19 13:19:06.307394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e5ec8 00:27:03.020 [2024-11-19 13:19:06.308215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.020 [2024-11-19 13:19:06.308234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:03.020 [2024-11-19 13:19:06.317302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e6b70 00:27:03.020 [2024-11-19 13:19:06.317986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19293 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.020 [2024-11-19 13:19:06.318006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:03.020 [2024-11-19 13:19:06.326575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e7818 00:27:03.020 [2024-11-19 13:19:06.327538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.020 [2024-11-19 13:19:06.327557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:03.020 [2024-11-19 13:19:06.336159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f57b0 00:27:03.020 [2024-11-19 13:19:06.337359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.020 [2024-11-19 13:19:06.337378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:03.020 [2024-11-19 13:19:06.344496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166df118 00:27:03.020 [2024-11-19 13:19:06.345776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.020 [2024-11-19 13:19:06.345795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:03.020 [2024-11-19 13:19:06.354093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166edd58 00:27:03.020 [2024-11-19 13:19:06.354807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.020 [2024-11-19 13:19:06.354826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:03.020 [2024-11-19 13:19:06.363363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f0ff8 00:27:03.020 [2024-11-19 13:19:06.364283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.020 [2024-11-19 13:19:06.364301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:03.020 [2024-11-19 13:19:06.373173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e1f80 00:27:03.020 [2024-11-19 13:19:06.374353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.020 [2024-11-19 13:19:06.374371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:03.020 [2024-11-19 13:19:06.380469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e3060 00:27:03.020 [2024-11-19 13:19:06.381197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:90 nsid:1 lba:15012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.020 [2024-11-19 13:19:06.381215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:03.020 [2024-11-19 13:19:06.391820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f57b0 00:27:03.020 [2024-11-19 13:19:06.393121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.020 [2024-11-19 13:19:06.393139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:03.280 [2024-11-19 13:19:06.399189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e3d08 00:27:03.280 [2024-11-19 13:19:06.400022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.280 [2024-11-19 13:19:06.400043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:03.280 [2024-11-19 13:19:06.410244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fa3a0 00:27:03.280 [2024-11-19 13:19:06.411614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.411633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.417529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e6fa8 00:27:03.281 [2024-11-19 13:19:06.418428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.418446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.427094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ea248 00:27:03.281 [2024-11-19 13:19:06.428105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.428123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.436765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f92c0 00:27:03.281 [2024-11-19 13:19:06.437894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.437913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.445246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f1868 00:27:03.281 [2024-11-19 13:19:06.445926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.445944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.454821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166eb328 00:27:03.281 [2024-11-19 13:19:06.455826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.455844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.464411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166eea00 00:27:03.281 [2024-11-19 13:19:06.465556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.465575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.473091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e3060 00:27:03.281 [2024-11-19 13:19:06.473958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.473978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.482520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166eff18 00:27:03.281 [2024-11-19 13:19:06.483474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.483493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.493880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e6fa8 00:27:03.281 [2024-11-19 13:19:06.495303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.495322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.502410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f20d8 00:27:03.281 [2024-11-19 13:19:06.503373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.503392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.511744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166eaab8 00:27:03.281 [2024-11-19 
13:19:06.512939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.512961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.518588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f5be8 00:27:03.281 [2024-11-19 13:19:06.519204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.519225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.530067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e2c28 00:27:03.281 [2024-11-19 13:19:06.531157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.531177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.539181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f7da8 00:27:03.281 [2024-11-19 13:19:06.540034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.540057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.548490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f6cc8 00:27:03.281 [2024-11-19 13:19:06.549459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.549479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.558071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f4f40 00:27:03.281 [2024-11-19 13:19:06.559167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.559187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.565490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f9f68 00:27:03.281 [2024-11-19 13:19:06.566119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.566139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.575034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f8e88 
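Each failure in the stream above is logged as a pair: the target's TCP transport (tcp.c:2233, data_crc32_calc_done) reports a CRC-32C data-digest mismatch on the PDU it received, and the initiator's nvme_qpair.c prints the WRITE command that owned it plus a completion with status 00/22 (COMMAND TRANSIENT TRANSPORT ERROR) and dnr:0, i.e. retryable. A quick way to tally these records from a saved copy of this console output (illustrative post-processing only, not part of digest.sh; build.log is a placeholder name):

    # Count the injected digest failures and see which PDU addresses they landed on.
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' build.log
    grep -o 'pdu=0x[0-9a-f]*' build.log | sort | uniq -c | sort -rn | head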
00:27:03.281 [2024-11-19 13:19:06.575511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.575531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.584666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e5ec8 00:27:03.281 [2024-11-19 13:19:06.585259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.585278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.594245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f20d8 00:27:03.281 [2024-11-19 13:19:06.594970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.594990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.604716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166df550 00:27:03.281 [2024-11-19 13:19:06.606258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.281 [2024-11-19 13:19:06.606278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:03.281 [2024-11-19 13:19:06.611251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fc998 00:27:03.281 [2024-11-19 13:19:06.611955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.282 [2024-11-19 13:19:06.611974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:03.282 [2024-11-19 13:19:06.620387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166df988 00:27:03.282 [2024-11-19 13:19:06.621203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.282 [2024-11-19 13:19:06.621225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:03.282 [2024-11-19 13:19:06.631667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f5be8 00:27:03.282 [2024-11-19 13:19:06.632872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.282 [2024-11-19 13:19:06.632892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:03.282 [2024-11-19 13:19:06.640820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) 
with pdu=0x2000166fe720 00:27:03.282 [2024-11-19 13:19:06.642086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.282 [2024-11-19 13:19:06.642105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:03.282 [2024-11-19 13:19:06.650424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e38d0 00:27:03.282 [2024-11-19 13:19:06.651826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.282 [2024-11-19 13:19:06.651846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:03.542 [2024-11-19 13:19:06.660168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e9168 00:27:03.542 [2024-11-19 13:19:06.661712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.542 [2024-11-19 13:19:06.661731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:03.542 [2024-11-19 13:19:06.666660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f96f8 00:27:03.542 [2024-11-19 13:19:06.667383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.542 [2024-11-19 13:19:06.667403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:03.542 [2024-11-19 13:19:06.675970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e4140 00:27:03.542 [2024-11-19 13:19:06.676672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.542 [2024-11-19 13:19:06.676691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:03.542 [2024-11-19 13:19:06.685149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166edd58 00:27:03.542 [2024-11-19 13:19:06.685858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.542 [2024-11-19 13:19:06.685877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:03.542 [2024-11-19 13:19:06.694325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f6020 00:27:03.542 [2024-11-19 13:19:06.695037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.542 [2024-11-19 13:19:06.695057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:03.542 [2024-11-19 13:19:06.703684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1ceb640) with pdu=0x2000166e3d08 00:27:03.542 [2024-11-19 13:19:06.704423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.704443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.713320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fef90 00:27:03.543 [2024-11-19 13:19:06.714169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.714188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.722513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166feb58 00:27:03.543 [2024-11-19 13:19:06.723422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.723440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.732159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f8e88 00:27:03.543 [2024-11-19 13:19:06.733162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.733181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.742090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f8e88 00:27:03.543 [2024-11-19 13:19:06.743160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.743180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.752411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f8e88 00:27:03.543 [2024-11-19 13:19:06.753913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.753933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.758882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ee5c8 00:27:03.543 [2024-11-19 13:19:06.759569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.759588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.768857] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f0bc0 00:27:03.543 [2024-11-19 13:19:06.770143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.770162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.778715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166df550 00:27:03.543 [2024-11-19 13:19:06.779469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.779488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.787623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fd208 00:27:03.543 [2024-11-19 13:19:06.788574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.788593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.796943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166ec840 00:27:03.543 [2024-11-19 13:19:06.797763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.797782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.806412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f6458 00:27:03.543 [2024-11-19 13:19:06.807389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.807408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.816622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f6458 00:27:03.543 [2024-11-19 13:19:06.818126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.818146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.823094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e7818 00:27:03.543 [2024-11-19 13:19:06.823747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.823766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:03.543 
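The completion fields locate each hit: qid:1 is the run's single I/O queue, cid is the command slot, and sqhd is the submission-queue head at completion time; dnr:0 (do-not-retry clear) is what keeps these errors transient rather than fatal, so the bdev layer's --bdev-retry-count -1 setting can resubmit them. To see how the failures spread across command slots in a saved copy of this output (again illustrative only; build.log is a placeholder):

    # Histogram of transient-error completions per command identifier (cid).
    grep -o 'TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:[0-9]*' build.log \
        | awk -F'cid:' '{n[$2]++} END {for (c in n) print n[c], "cid", c}' \
        | sort -rn | head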
[2024-11-19 13:19:06.831793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f8e88 00:27:03.543 [2024-11-19 13:19:06.832501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.832521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.843111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f0350 00:27:03.543 [2024-11-19 13:19:06.844257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.844275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.852719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166efae0 00:27:03.543 [2024-11-19 13:19:06.853993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.854012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.862531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fb8b8 00:27:03.543 [2024-11-19 13:19:06.863924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.863946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.871840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f9f68 00:27:03.543 [2024-11-19 13:19:06.873176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.873195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.878125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f20d8 00:27:03.543 [2024-11-19 13:19:06.878802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.878820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.888706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166eea00 00:27:03.543 [2024-11-19 13:19:06.889680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.889700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 
dnr:0 00:27:03.543 [2024-11-19 13:19:06.898300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f81e0 00:27:03.543 [2024-11-19 13:19:06.899179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.899198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.906804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e4578 00:27:03.543 [2024-11-19 13:19:06.907631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.543 [2024-11-19 13:19:06.907652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:03.543 [2024-11-19 13:19:06.916001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e4140 00:27:03.543 [2024-11-19 13:19:06.916787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.544 [2024-11-19 13:19:06.916806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:03.803 [2024-11-19 13:19:06.924749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166fd208 00:27:03.803 [2024-11-19 13:19:06.925537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.803 [2024-11-19 13:19:06.925557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.803 [2024-11-19 13:19:06.934615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f0bc0 00:27:03.803 [2024-11-19 13:19:06.935598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.803 [2024-11-19 13:19:06.935618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:03.803 [2024-11-19 13:19:06.944897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f4f40 00:27:03.803 [2024-11-19 13:19:06.945625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.803 [2024-11-19 13:19:06.945645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:03.803 [2024-11-19 13:19:06.955019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e6300 00:27:03.803 [2024-11-19 13:19:06.956415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.803 [2024-11-19 13:19:06.956434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:004e p:0 m:0 dnr:0 00:27:03.803 [2024-11-19 13:19:06.963558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166f9b30 00:27:03.803 [2024-11-19 13:19:06.964623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.803 [2024-11-19 13:19:06.964642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:03.803 [2024-11-19 13:19:06.971945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166eee38 00:27:03.803 [2024-11-19 13:19:06.973201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.803 [2024-11-19 13:19:06.973220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:03.803 [2024-11-19 13:19:06.979851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166e88f8 00:27:03.803 [2024-11-19 13:19:06.980522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.804 [2024-11-19 13:19:06.980541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:03.804 [2024-11-19 13:19:06.990042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ceb640) with pdu=0x2000166eb760 00:27:03.804 [2024-11-19 13:19:06.990840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.804 [2024-11-19 13:19:06.990858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:03.804 27665.50 IOPS, 108.07 MiB/s 00:27:03.804 Latency(us) 00:27:03.804 [2024-11-19T12:19:07.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.804 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:03.804 nvme0n1 : 2.00 27665.21 108.07 0.00 0.00 4620.56 1937.59 11853.47 00:27:03.804 [2024-11-19T12:19:07.181Z] =================================================================================================================== 00:27:03.804 [2024-11-19T12:19:07.181Z] Total : 27665.21 108.07 0.00 0.00 4620.56 1937.59 11853.47 00:27:03.804 { 00:27:03.804 "results": [ 00:27:03.804 { 00:27:03.804 "job": "nvme0n1", 00:27:03.804 "core_mask": "0x2", 00:27:03.804 "workload": "randwrite", 00:27:03.804 "status": "finished", 00:27:03.804 "queue_depth": 128, 00:27:03.804 "io_size": 4096, 00:27:03.804 "runtime": 2.004648, 00:27:03.804 "iops": 27665.20606111397, 00:27:03.804 "mibps": 108.06721117622645, 00:27:03.804 "io_failed": 0, 00:27:03.804 "io_timeout": 0, 00:27:03.804 "avg_latency_us": 4620.561666722851, 00:27:03.804 "min_latency_us": 1937.5860869565217, 00:27:03.804 "max_latency_us": 11853.467826086957 00:27:03.804 } 00:27:03.804 ], 00:27:03.804 "core_count": 1 00:27:03.804 } 00:27:03.804 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:03.804 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- 
# bperf_rpc bdev_get_iostat -b nvme0n1 00:27:03.804 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:03.804 | .driver_specific 00:27:03.804 | .nvme_error 00:27:03.804 | .status_code 00:27:03.804 | .command_transient_transport_error' 00:27:03.804 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:04.063 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 )) 00:27:04.063 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2993138 00:27:04.063 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2993138 ']' 00:27:04.063 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2993138 00:27:04.063 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:04.063 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:04.063 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2993138 00:27:04.063 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:04.063 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:04.063 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2993138' 00:27:04.063 killing process with pid 2993138 00:27:04.064 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2993138 00:27:04.064 Received shutdown signal, test time was about 2.000000 seconds 00:27:04.064 00:27:04.064 Latency(us) 00:27:04.064 [2024-11-19T12:19:07.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.064 [2024-11-19T12:19:07.441Z] =================================================================================================================== 00:27:04.064 [2024-11-19T12:19:07.441Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:04.064 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2993138 00:27:04.064 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:04.064 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:04.064 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:04.064 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:04.064 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:04.064 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2993756 00:27:04.064 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2993756 /var/tmp/bperf.sock 00:27:04.064 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r 
/var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:04.064 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2993756 ']' 00:27:04.064 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:04.064 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:04.064 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:04.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:04.064 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:04.064 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.324 [2024-11-19 13:19:07.478705] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:27:04.324 [2024-11-19 13:19:07.478751] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2993756 ] 00:27:04.324 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:04.324 Zero copy mechanism will not be used. 00:27:04.324 [2024-11-19 13:19:07.554102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.324 [2024-11-19 13:19:07.596842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.324 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:04.324 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:04.324 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:04.324 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:04.583 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:04.583 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.583 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.583 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.583 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.583 13:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:05.152 nvme0n1 00:27:05.152 13:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- 
# rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:05.152 13:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.152 13:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:05.152 13:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.152 13:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:05.152 13:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:05.152 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:05.152 Zero copy mechanism will not be used. 00:27:05.152 Running I/O for 2 seconds... 00:27:05.152 [2024-11-19 13:19:08.427526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.152 [2024-11-19 13:19:08.427612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.152 [2024-11-19 13:19:08.427642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.152 [2024-11-19 13:19:08.432216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.152 [2024-11-19 13:19:08.432288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.152 [2024-11-19 13:19:08.432316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.152 [2024-11-19 13:19:08.437194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.152 [2024-11-19 13:19:08.437266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.152 [2024-11-19 13:19:08.437287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.152 [2024-11-19 13:19:08.441612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.152 [2024-11-19 13:19:08.441672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.152 [2024-11-19 13:19:08.441693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.152 [2024-11-19 13:19:08.446453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.152 [2024-11-19 13:19:08.446536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.152 [2024-11-19 13:19:08.446557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.152 [2024-11-19 13:19:08.451311] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.152 [2024-11-19 13:19:08.451373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.152 [2024-11-19 13:19:08.451393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.152 [2024-11-19 13:19:08.456714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.152 [2024-11-19 13:19:08.456771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.152 [2024-11-19 13:19:08.456790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.152 [2024-11-19 13:19:08.462093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.152 [2024-11-19 13:19:08.462164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.152 [2024-11-19 13:19:08.462182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.152 [2024-11-19 13:19:08.467639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.152 [2024-11-19 13:19:08.467695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.152 [2024-11-19 13:19:08.467714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.152 [2024-11-19 13:19:08.473177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.152 [2024-11-19 13:19:08.473270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.152 [2024-11-19 13:19:08.473291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.152 [2024-11-19 13:19:08.478848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.152 [2024-11-19 13:19:08.478914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.152 [2024-11-19 13:19:08.478935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.152 [2024-11-19 13:19:08.483873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.152 [2024-11-19 13:19:08.483996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.152 [2024-11-19 13:19:08.484017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.152 
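For reference, the setup this second pass (randwrite, 128 KiB, qd 16) runs can be read straight off the digest.sh xtrace above; condensed, with the rpc.py/bdevperf.py paths shortened but the arguments verbatim from the trace, it amounts to:

    # bperf_rpc talks to the bdevperf instance on /var/tmp/bperf.sock;
    # rpc_cmd talks to the nvmf target, whose accel crc32c path gets corrupted.
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc.py accel_error_inject_error -o crc32c -t disable        # target: start clean
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32  # target: corrupt CRC-32C ops
    bdevperf.py -s /var/tmp/bperf.sock perform_tests            # drive I/O for 2 seconds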
[2024-11-19 13:19:08.488817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.152 [2024-11-19 13:19:08.488884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.153 [2024-11-19 13:19:08.488902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.153 [2024-11-19 13:19:08.493578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.153 [2024-11-19 13:19:08.493694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.153 [2024-11-19 13:19:08.493715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.153 [2024-11-19 13:19:08.498051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.153 [2024-11-19 13:19:08.498159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.153 [2024-11-19 13:19:08.498180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.153 [2024-11-19 13:19:08.503855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.153 [2024-11-19 13:19:08.503931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.153 [2024-11-19 13:19:08.503957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.153 [2024-11-19 13:19:08.508496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.153 [2024-11-19 13:19:08.508553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.153 [2024-11-19 13:19:08.508573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.153 [2024-11-19 13:19:08.512910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.153 [2024-11-19 13:19:08.512971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.153 [2024-11-19 13:19:08.512990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.153 [2024-11-19 13:19:08.517360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.153 [2024-11-19 13:19:08.517443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.153 [2024-11-19 13:19:08.517463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.153 [2024-11-19 13:19:08.521792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.153 [2024-11-19 13:19:08.521857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.153 [2024-11-19 13:19:08.521876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.153 [2024-11-19 13:19:08.526282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.153 [2024-11-19 13:19:08.526355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.153 [2024-11-19 13:19:08.526375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.530787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.530849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.530869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.535501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.535565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.535585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.540313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.540388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.540407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.544681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.544744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.544763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.549155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.549221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.549240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.553534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.553640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.553662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.557928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.558017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.558041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.563047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.563122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.563141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.567599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.567667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.567686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.572039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.572101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.572120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.576445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.576512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.576533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.580926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.581005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.581025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.585382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.585438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.585457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.589842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.589908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.589927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.594738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.594807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.594826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.599342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.599406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.599426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.603696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.603753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.603773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.608060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.608162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.608184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.612483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.612538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.612557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.616882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.616936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.616962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.621268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.621389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.621409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.625657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.625715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.625734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.630078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.630143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.414 [2024-11-19 13:19:08.630162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.414 [2024-11-19 13:19:08.634431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.414 [2024-11-19 13:19:08.634512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.634532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.638818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.638887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.638907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.643188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.643257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.643276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.647503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.647566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.647585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.651843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.651909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.651928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.656382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.656435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.656454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.660941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.661012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.661030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.666323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.666391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.666409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.671713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.671822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.671842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.677903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.678077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.678102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.685523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.685602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.685623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.692331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.692388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.692407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.698444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.698574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.698593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.703908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.704006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.704026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.708986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.709106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.709125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.713820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.713876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.713895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.718643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.718712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.718731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.723539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.723622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.723642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.728709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.728825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.728843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.733485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.733570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.733589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.738672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.738728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.738746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.743279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.743344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.743363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.747848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.747915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.747934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.752445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.752555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.752574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.757202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.757280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.757298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.761882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.761959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.761978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.766602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.766671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.415 [2024-11-19 13:19:08.766690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.415 [2024-11-19 13:19:08.771124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.415 [2024-11-19 13:19:08.771231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.416 [2024-11-19 13:19:08.771251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.416 [2024-11-19 13:19:08.775522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.416 [2024-11-19 13:19:08.775591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.416 [2024-11-19 13:19:08.775609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.416 [2024-11-19 13:19:08.779945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.416 [2024-11-19 13:19:08.780019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.416 [2024-11-19 13:19:08.780037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.416 [2024-11-19 13:19:08.784550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.416 [2024-11-19 13:19:08.784625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.416 [2024-11-19 13:19:08.784645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.676 [2024-11-19 13:19:08.788987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.789043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.789062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.793624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.793686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.793705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.797975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.798049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.798067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.802424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.802494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.802513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.806769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.806830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.806852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.811274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.811335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.811355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.815666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.815720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.815739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.820116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.820181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.820200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.824651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.824720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.824739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.829079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.829136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.829155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.833675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.833739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.833757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.838107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.838217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.838236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.842698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.842754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.842772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.847073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.847130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.847148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.851496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.851569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.851587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.856113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.856184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.856203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.860697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.860767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.860786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.865159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.865215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.865233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.869568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.869641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.869660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.873930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.874000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.874018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.878290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.878355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.878373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.882796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.882855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.882873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.887166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.887219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.887237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.891639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.891731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.891750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.896098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.896201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.896219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.900555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.677 [2024-11-19 13:19:08.900629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.677 [2024-11-19 13:19:08.900648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.677 [2024-11-19 13:19:08.904931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.904995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.905014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.909393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.909447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.909466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.913899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.914064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.914082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.919014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.919089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.919108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.923938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.924041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.924063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.929985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.930176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.930194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.935909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.936075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.936095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.941094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.941172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.941191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.946564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.946673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.946691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.951190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.951241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.951259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.955612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.955684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.955702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.959988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.960054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.960073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.964353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.964417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.964436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.968687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.968757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.968775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.973061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.973122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.973141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.977557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.977631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.977649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.981886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.981939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.981963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.986324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.986392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.986411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.990680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.990744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.990764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.994972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.995038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.995058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:08.999551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:08.999613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:08.999631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:09.003879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:09.003944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:09.003969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:09.008216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:09.008273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:09.008291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:09.012694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:09.012822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:09.012840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:09.017095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:09.017152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:09.017170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:09.021417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:09.021527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:09.021546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:09.026107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.678 [2024-11-19 13:19:09.026178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.678 [2024-11-19 13:19:09.026196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.678 [2024-11-19 13:19:09.030824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.679 [2024-11-19 13:19:09.030891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.679 [2024-11-19 13:19:09.030909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.679 [2024-11-19 13:19:09.035391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.679 [2024-11-19 13:19:09.035503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.679 [2024-11-19 13:19:09.035521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.679 [2024-11-19 13:19:09.039869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.679 [2024-11-19 13:19:09.039993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.679 [2024-11-19 13:19:09.040011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.679 [2024-11-19 13:19:09.044252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.679 [2024-11-19 13:19:09.044367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.679 [2024-11-19 13:19:09.044388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.679 [2024-11-19 13:19:09.048731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.679 [2024-11-19 13:19:09.048784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.679 [2024-11-19 13:19:09.048803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.949 [2024-11-19 13:19:09.053124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.949 [2024-11-19 13:19:09.053234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.949 [2024-11-19 13:19:09.053253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.949 [2024-11-19 13:19:09.057702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.949 [2024-11-19 13:19:09.057760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.949 [2024-11-19 13:19:09.057778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.949 [2024-11-19 13:19:09.062586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.949 [2024-11-19 13:19:09.062638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.949 [2024-11-19 13:19:09.062657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.949 [2024-11-19 13:19:09.067732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.949 [2024-11-19 13:19:09.067790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.949 [2024-11-19 13:19:09.067808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.949 [2024-11-19 13:19:09.073126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.949 [2024-11-19 13:19:09.073180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.949 [2024-11-19 13:19:09.073198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.949 [2024-11-19 13:19:09.078298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.949 [2024-11-19 13:19:09.078355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.949 [2024-11-19 13:19:09.078373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.949 [2024-11-19 13:19:09.083293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.949 [2024-11-19 13:19:09.083430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.949 [2024-11-19 13:19:09.083449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.949 [2024-11-19 13:19:09.088557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.949 [2024-11-19 13:19:09.088626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.949 [2024-11-19 13:19:09.088645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.949 [2024-11-19 13:19:09.093698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.949 [2024-11-19 13:19:09.093753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.949 [2024-11-19 13:19:09.093772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.949 [2024-11-19 13:19:09.098330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.949 [2024-11-19 13:19:09.098407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.949 [2024-11-19 13:19:09.098426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.949 [2024-11-19 13:19:09.103277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.949 [2024-11-19 13:19:09.103687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.949 [2024-11-19 13:19:09.103707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.949 [2024-11-19 13:19:09.108583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.949 [2024-11-19 13:19:09.108642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.949 [2024-11-19 13:19:09.108661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.949 [2024-11-19 13:19:09.113585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.949 [2024-11-19 13:19:09.113659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.949 [2024-11-19 13:19:09.113678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.949 [2024-11-19 13:19:09.118802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.949 [2024-11-19 13:19:09.118912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.949 [2024-11-19 13:19:09.118931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.949 [2024-11-19 13:19:09.123555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.950 [2024-11-19 13:19:09.123644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.950 [2024-11-19 13:19:09.123663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.950 [2024-11-19 13:19:09.128429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.950 [2024-11-19 13:19:09.128529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.950 [2024-11-19 13:19:09.128548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.950 [2024-11-19 13:19:09.133724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.950 [2024-11-19 13:19:09.133782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.950 [2024-11-19 13:19:09.133800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.950 [2024-11-19 13:19:09.139171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.950 [2024-11-19 13:19:09.139230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.950 [2024-11-19 13:19:09.139249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.950 [2024-11-19 13:19:09.143825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.950 [2024-11-19 13:19:09.143896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.950 [2024-11-19 13:19:09.143915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.950 [2024-11-19 13:19:09.148512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.950 [2024-11-19 13:19:09.148581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.950 [2024-11-19 13:19:09.148599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.950 [2024-11-19 13:19:09.152900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.950 [2024-11-19 13:19:09.153013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.950 [2024-11-19 13:19:09.153032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.950 [2024-11-19 13:19:09.157423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.950 [2024-11-19 13:19:09.157530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.950 [2024-11-19 13:19:09.157549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.950 [2024-11-19 13:19:09.162054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.950 [2024-11-19 13:19:09.162138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.950 [2024-11-19 13:19:09.162156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.950 [2024-11-19 13:19:09.166704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.950 [2024-11-19 13:19:09.166782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.950 [2024-11-19 13:19:09.166802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.950 [2024-11-19 13:19:09.171701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:05.950 [2024-11-19 13:19:09.171797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.950 [2024-11-19 13:19:09.171819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.176350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.176412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.176431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.180989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.181045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.181063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.185874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.185938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.185963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.190589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.190662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.190682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.195282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.195356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.195375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.200284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.200395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.200414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.204934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.205016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.205035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.209348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.209404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.209422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.213687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.213760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.213778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.218054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.218129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.218148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.222398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.222471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.222489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.226941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.227036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.227055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.231607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.231673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.231691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.236340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.236507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.236527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.241711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.241876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.241894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.247942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.248089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.248108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.254412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.254507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.254525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.260494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.260659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.260677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.266707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.266868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.266887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.272870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.273019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.273038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.279245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.279414] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.279434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.285594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.285745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.285764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.292537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.292689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.292708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.299241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.299375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.299393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.306982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.307111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.307131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.314301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.314458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.314480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.950 [2024-11-19 13:19:09.322260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:05.950 [2024-11-19 13:19:09.322416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.950 [2024-11-19 13:19:09.322435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.210 [2024-11-19 13:19:09.329815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.210 [2024-11-19 13:19:09.329977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.210 [2024-11-19 13:19:09.329996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.210 [2024-11-19 13:19:09.336594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.210 [2024-11-19 13:19:09.336669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.210 [2024-11-19 13:19:09.336688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.210 [2024-11-19 13:19:09.341989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.210 [2024-11-19 13:19:09.342075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.210 [2024-11-19 13:19:09.342093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.210 [2024-11-19 13:19:09.346720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.210 [2024-11-19 13:19:09.346794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.210 [2024-11-19 13:19:09.346813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.210 [2024-11-19 13:19:09.351507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.210 [2024-11-19 13:19:09.351566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.210 [2024-11-19 13:19:09.351584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.210 [2024-11-19 13:19:09.356202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.210 [2024-11-19 13:19:09.356301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.210 [2024-11-19 13:19:09.356319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.210 [2024-11-19 13:19:09.360929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.210 [2024-11-19 13:19:09.361004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.210 [2024-11-19 13:19:09.361023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.211 [2024-11-19 13:19:09.365959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.211 [2024-11-19 
13:19:09.366036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.211 [2024-11-19 13:19:09.366054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.211 [2024-11-19 13:19:09.370634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.211 [2024-11-19 13:19:09.370734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.211 [2024-11-19 13:19:09.370752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.211 [2024-11-19 13:19:09.375276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.211 [2024-11-19 13:19:09.375347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.211 [2024-11-19 13:19:09.375365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.211 [2024-11-19 13:19:09.380334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.211 [2024-11-19 13:19:09.380408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.211 [2024-11-19 13:19:09.380427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.211 [2024-11-19 13:19:09.385050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.211 [2024-11-19 13:19:09.385104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.211 [2024-11-19 13:19:09.385122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.211 [2024-11-19 13:19:09.389850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.211 [2024-11-19 13:19:09.389923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.211 [2024-11-19 13:19:09.389942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.211 [2024-11-19 13:19:09.394556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.211 [2024-11-19 13:19:09.394674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.211 [2024-11-19 13:19:09.394693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.211 [2024-11-19 13:19:09.399247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with 
pdu=0x2000166ff3c8 00:27:06.211 [2024-11-19 13:19:09.399314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.211 [2024-11-19 13:19:09.399333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.211 [2024-11-19 13:19:09.403966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.211 [2024-11-19 13:19:09.404060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.211 [2024-11-19 13:19:09.404078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.211 [2024-11-19 13:19:09.408719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.211 [2024-11-19 13:19:09.408800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.211 [2024-11-19 13:19:09.408821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.211 [2024-11-19 13:19:09.413466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.211 [2024-11-19 13:19:09.413554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.211 [2024-11-19 13:19:09.413572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.211 [2024-11-19 13:19:09.418247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.211 [2024-11-19 13:19:09.418322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.211 [2024-11-19 13:19:09.418341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.211 [2024-11-19 13:19:09.422819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.211 [2024-11-19 13:19:09.422888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.211 [2024-11-19 13:19:09.422906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.211 6333.00 IOPS, 791.62 MiB/s [2024-11-19T12:19:09.588Z] [2024-11-19 13:19:09.429172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.211 [2024-11-19 13:19:09.429236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.211 [2024-11-19 13:19:09.429255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.211 [2024-11-19 13:19:09.433882] 
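
For context on the repeated tcp.c:2233:data_crc32_calc_done errors above: in NVMe/TCP, a data PDU may carry a 4-byte data digest (DDGST), a CRC32C over the PDU's data field, and the error fires when the digest computed over the received data does not match the digest carried in the PDU. The following is a minimal, self-contained sketch of that check, assuming the standard CRC32C definition -- a plain bitwise implementation for clarity, not SPDK's table-driven or hardware-accelerated one; the buffer contents and the injected bit flip are hypothetical.

/* Hedged sketch: validate an NVMe/TCP data digest (DDGST) the way the
 * tcp.c error above implies. CRC32C (Castagnoli): reflected polynomial
 * 0x82F63B78, initial value 0xFFFFFFFF, final complement. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* 32 blocks of 512 B, matching the "len:32" WRITEs in the log. */
    static uint8_t pdu_data[32 * 512];
    for (size_t i = 0; i < sizeof(pdu_data); i++)
        pdu_data[i] = (uint8_t)(i * 31u);

    /* Digest the sender would have placed in the PDU. */
    uint32_t ddgst_in_pdu = crc32c(pdu_data, sizeof(pdu_data));

    pdu_data[4096] ^= 0x01;   /* simulated on-the-wire corruption */

    /* Receiver recomputes over what actually arrived. */
    uint32_t ddgst_calc = crc32c(pdu_data, sizeof(pdu_data));
    if (ddgst_calc != ddgst_in_pdu)
        fprintf(stderr, "Data digest error: got 0x%08x, expected 0x%08x\n",
                (unsigned)ddgst_calc, (unsigned)ddgst_in_pdu);
    return 0;
}

A single flipped bit changes the CRC, so the receiver rejects the PDU and, as every completion in this run shows, the command fails with a transport-level error rather than corrupt data being written.
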
[the digest-error run continues meanwhile: ~35 more records from 13:19:09.433 through 13:19:09.596, ending:]
00:27:06.472 [2024-11-19 13:19:09.596282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.472 [2024-11-19 13:19:09.596339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.472 [2024-11-19 13:19:09.596357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
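
Each completion above is printed as "TRANSIENT TRANSPORT ERROR (00/22) ... p:0 m:0 dnr:0", i.e. status code type 0x0 (generic command status) and status code 0x22 (Transient Transport Error), with Do Not Retry clear so the host may retry. A hedged sketch of decoding those fields from completion queue entry dword 3, following the NVMe base specification layout (the helper name and values are illustrative, not SPDK's API):

#include <stdint.h>
#include <stdio.h>

/* NVMe CQE dword 3: bits 0-15 = command identifier (CID), bit 16 = phase,
 * bits 17-24 = status code (SC), bits 25-27 = status code type (SCT),
 * bit 30 = more (M), bit 31 = do-not-retry (DNR).
 * (SQHD and SQID live in dword 2, not decoded here.) */
static void print_status(uint32_t cdw3)
{
    unsigned cid = cdw3 & 0xffffu;
    unsigned p   = (cdw3 >> 16) & 0x1u;
    unsigned sc  = (cdw3 >> 17) & 0xffu;
    unsigned sct = (cdw3 >> 25) & 0x7u;
    unsigned m   = (cdw3 >> 30) & 0x1u;
    unsigned dnr = (cdw3 >> 31) & 0x1u;
    printf("(%02x/%02x) cid:%u p:%u m:%u dnr:%u\n", sct, sc, cid, p, m, dnr);
}

int main(void)
{
    /* sct=0x0 generic, sc=0x22 Transient Transport Error, cid=0, phase 0 */
    uint32_t cdw3 = (0x0u << 25) | (0x22u << 17) | 0x0000u;
    print_status(cdw3);   /* -> (00/22) cid:0 p:0 m:0 dnr:0 */
    return 0;
}

With dnr:0 and a transient status, the initiator is told the data, not the command, was at fault in transit, which is exactly what the injected digest corruption in this test is meant to provoke.
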
[the run continues through 13:19:09.775, ~30 more records; final entries:]
00:27:06.473 [2024-11-19 13:19:09.770792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.473 [2024-11-19 13:19:09.770854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.473 [2024-11-19 13:19:09.770873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.473 [2024-11-19 13:19:09.775399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.473 [2024-11-19 13:19:09.775496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0
nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.473 [2024-11-19 13:19:09.775515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.473 [2024-11-19 13:19:09.780056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.473 [2024-11-19 13:19:09.780112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.473 [2024-11-19 13:19:09.780133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.473 [2024-11-19 13:19:09.784622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.473 [2024-11-19 13:19:09.784678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.473 [2024-11-19 13:19:09.784698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.473 [2024-11-19 13:19:09.789733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.473 [2024-11-19 13:19:09.789785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.474 [2024-11-19 13:19:09.789806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.474 [2024-11-19 13:19:09.795542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.474 [2024-11-19 13:19:09.795589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.474 [2024-11-19 13:19:09.795610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.474 [2024-11-19 13:19:09.801572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.474 [2024-11-19 13:19:09.801712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.474 [2024-11-19 13:19:09.801734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.474 [2024-11-19 13:19:09.808990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.474 [2024-11-19 13:19:09.809057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.474 [2024-11-19 13:19:09.809076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.474 [2024-11-19 13:19:09.815435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.474 [2024-11-19 13:19:09.815506] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.474 [2024-11-19 13:19:09.815526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.474 [2024-11-19 13:19:09.822059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.474 [2024-11-19 13:19:09.822122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.474 [2024-11-19 13:19:09.822142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.474 [2024-11-19 13:19:09.828830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.474 [2024-11-19 13:19:09.829007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.474 [2024-11-19 13:19:09.829029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.474 [2024-11-19 13:19:09.835027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.474 [2024-11-19 13:19:09.835081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.474 [2024-11-19 13:19:09.835100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.474 [2024-11-19 13:19:09.840454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.474 [2024-11-19 13:19:09.840533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.474 [2024-11-19 13:19:09.840553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.474 [2024-11-19 13:19:09.845133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.474 [2024-11-19 13:19:09.845199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.474 [2024-11-19 13:19:09.845222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.733 [2024-11-19 13:19:09.849877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.733 [2024-11-19 13:19:09.849932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.733 [2024-11-19 13:19:09.849958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.733 [2024-11-19 13:19:09.854523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.854589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.854608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.859264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.859330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.859348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.864025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.864089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.864108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.868816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.868924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.868943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.873383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.873444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.873463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.878075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.878131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.878149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.882819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.882940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.882974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.888189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 
13:19:09.888246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.888265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.893494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.893562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.893581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.898990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.899042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.899061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.904468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.904528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.904547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.909414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.909539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.909558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.914100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.914161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.914180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.919064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.919173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.919191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.924512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 
00:27:06.734 [2024-11-19 13:19:09.924629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.924649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.929601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.929676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.929696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.934243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.934322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.934341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.939016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.939080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.939098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.943841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.943907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.943926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.948561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.948640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.948660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.953417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.953472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.953490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.958277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) 
with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.958337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.958355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.963069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.963127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.963146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.967614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.967678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.967697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.972243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.972352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.972374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.977248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.977321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.977341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.734 [2024-11-19 13:19:09.982233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.734 [2024-11-19 13:19:09.982324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.734 [2024-11-19 13:19:09.982343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:09.988251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:09.988429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:09.988448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:09.994098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:09.994200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:09.994218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:09.999570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:09.999661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:09.999680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.004893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.004954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.004974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.009883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.010021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.010041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.015070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.015207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.015226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.020959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.021044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.021065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.027016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.027147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.027166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.032066] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.032139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.032158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.036821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.036883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.036902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.041642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.041704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.041724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.046286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.046366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.046385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.051296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.051372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.051392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.057225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.057391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.057410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.064193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.064334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.064354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.735 
[2024-11-19 13:19:10.072194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.072342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.072361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.078851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.079204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.079225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.085981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.086277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.086299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.092750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.093000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.093021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.100071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.100365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.100385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.735 [2024-11-19 13:19:10.107141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.735 [2024-11-19 13:19:10.107527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.735 [2024-11-19 13:19:10.107548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.114024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.114332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.114352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.120633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.120964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.120986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.127577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.127869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.127894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.134563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.134901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.134922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.141278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.141593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.141613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.148087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.148417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.148437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.154284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.154518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.154538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.160272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.160564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.160585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.165480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.165733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.165754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.170343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.170596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.170616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.174996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.175251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.175271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.179455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.179716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.179737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.183818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.184089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.184109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.188355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.188625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.188646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.192836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.193099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.193119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.197112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.197374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.197395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.201396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.201653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.201673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.205659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.205923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.205943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.209911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.210171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.210191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.214186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.214445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.214465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.218451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.218708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.218728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.223381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.223622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.223643] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.229234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.229491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.229512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.236505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.236830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.236850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.243122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.243488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.243509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.249107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.249426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.249446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.255413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.255746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.255766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.261458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.261804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.261825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.996 [2024-11-19 13:19:10.267645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8 00:27:06.996 [2024-11-19 13:19:10.267998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.996 [2024-11-19 13:19:10.268022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.996 [2024-11-19 13:19:10.273916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.996 [2024-11-19 13:19:10.274482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.996 [2024-11-19 13:19:10.274503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.996 [2024-11-19 13:19:10.280202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.996 [2024-11-19 13:19:10.280487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.996 [2024-11-19 13:19:10.280507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.996 [2024-11-19 13:19:10.286822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.996 [2024-11-19 13:19:10.287130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.996 [2024-11-19 13:19:10.287151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.996 [2024-11-19 13:19:10.294420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.996 [2024-11-19 13:19:10.294660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.996 [2024-11-19 13:19:10.294681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.996 [2024-11-19 13:19:10.301123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.996 [2024-11-19 13:19:10.301689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.996 [2024-11-19 13:19:10.301710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.996 [2024-11-19 13:19:10.308427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.996 [2024-11-19 13:19:10.308647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.996 [2024-11-19 13:19:10.308667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.996 [2024-11-19 13:19:10.315813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.996 [2024-11-19 13:19:10.316098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.996 [2024-11-19 13:19:10.316118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.996 [2024-11-19 13:19:10.321902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.996 [2024-11-19 13:19:10.322148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.996 [2024-11-19 13:19:10.322168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.996 [2024-11-19 13:19:10.327044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.996 [2024-11-19 13:19:10.327289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.996 [2024-11-19 13:19:10.327310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.996 [2024-11-19 13:19:10.331722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.996 [2024-11-19 13:19:10.331990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.996 [2024-11-19 13:19:10.332010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.997 [2024-11-19 13:19:10.336273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.997 [2024-11-19 13:19:10.336514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.997 [2024-11-19 13:19:10.336534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.997 [2024-11-19 13:19:10.340651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.997 [2024-11-19 13:19:10.340905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.997 [2024-11-19 13:19:10.340926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.997 [2024-11-19 13:19:10.345111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.997 [2024-11-19 13:19:10.345363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.997 [2024-11-19 13:19:10.345383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.997 [2024-11-19 13:19:10.349604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.997 [2024-11-19 13:19:10.349857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.997 [2024-11-19 13:19:10.349877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:06.997 [2024-11-19 13:19:10.354111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.997 [2024-11-19 13:19:10.354368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.997 [2024-11-19 13:19:10.354388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:06.997 [2024-11-19 13:19:10.358632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.997 [2024-11-19 13:19:10.358895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.997 [2024-11-19 13:19:10.358915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:06.997 [2024-11-19 13:19:10.363245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.997 [2024-11-19 13:19:10.363499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.997 [2024-11-19 13:19:10.363520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:06.997 [2024-11-19 13:19:10.367664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:06.997 [2024-11-19 13:19:10.367914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.997 [2024-11-19 13:19:10.367934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.256 [2024-11-19 13:19:10.372150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:07.256 [2024-11-19 13:19:10.372409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.256 [2024-11-19 13:19:10.372429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.256 [2024-11-19 13:19:10.376794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:07.256 [2024-11-19 13:19:10.377053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.256 [2024-11-19 13:19:10.377073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.256 [2024-11-19 13:19:10.382330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:07.256 [2024-11-19 13:19:10.382585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.256 [2024-11-19 13:19:10.382605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.256 [2024-11-19 13:19:10.387197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:07.256 [2024-11-19 13:19:10.387497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.256 [2024-11-19 13:19:10.387517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.256 [2024-11-19 13:19:10.393403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:07.256 [2024-11-19 13:19:10.393739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.256 [2024-11-19 13:19:10.393759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.256 [2024-11-19 13:19:10.400470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:07.256 [2024-11-19 13:19:10.400797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.256 [2024-11-19 13:19:10.400817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.256 [2024-11-19 13:19:10.407563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:07.257 [2024-11-19 13:19:10.407916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.257 [2024-11-19 13:19:10.407936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.257 [2024-11-19 13:19:10.414667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:07.257 [2024-11-19 13:19:10.414969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.257 [2024-11-19 13:19:10.414994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.257 [2024-11-19 13:19:10.421376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:07.257 [2024-11-19 13:19:10.421706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.257 [2024-11-19 13:19:10.421726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.257 6085.50 IOPS, 760.69 MiB/s [2024-11-19T12:19:10.634Z]
[2024-11-19 13:19:10.429573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cebb20) with pdu=0x2000166ff3c8
00:27:07.257 [2024-11-19 13:19:10.429719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.257 [2024-11-19 13:19:10.429738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.257
00:27:07.257 Latency(us)
00:27:07.257 [2024-11-19T12:19:10.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:07.257 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:07.257 nvme0n1 : 2.00 6080.30 760.04 0.00 0.00 2626.12 1816.49 8491.19
00:27:07.257 [2024-11-19T12:19:10.634Z] ===================================================================================================================
00:27:07.257 [2024-11-19T12:19:10.634Z] Total : 6080.30 760.04 0.00 0.00 2626.12 1816.49 8491.19
00:27:07.257 {
00:27:07.257   "results": [
00:27:07.257     {
00:27:07.257       "job": "nvme0n1",
00:27:07.257       "core_mask": "0x2",
00:27:07.257       "workload": "randwrite",
00:27:07.257       "status": "finished",
00:27:07.257       "queue_depth": 16,
00:27:07.257       "io_size": 131072,
00:27:07.257       "runtime": 2.004343,
00:27:07.257       "iops": 6080.29663585524,
00:27:07.257       "mibps": 760.037079481905,
00:27:07.257       "io_failed": 0,
00:27:07.257       "io_timeout": 0,
00:27:07.257       "avg_latency_us": 2626.1210844056927,
00:27:07.257       "min_latency_us": 1816.486956521739,
00:27:07.257       "max_latency_us": 8491.186086956523
00:27:07.257     }
00:27:07.257   ],
00:27:07.257   "core_count": 1
00:27:07.257 }
00:27:07.257 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:07.257 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:07.257 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:07.257 | .driver_specific
00:27:07.257 | .nvme_error
00:27:07.257 | .status_code
00:27:07.257 | .command_transient_transport_error'
00:27:07.257 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:07.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 394 > 0 ))
00:27:07.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2993756
00:27:07.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2993756 ']'
00:27:07.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2993756
00:27:07.516 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:07.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:07.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2993756
00:27:07.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:07.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:07.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2993756'
killing process with pid 2993756
13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2993756
00:27:07.517 Received shutdown signal, test time was about 2.000000 seconds
00:27:07.517
00:27:07.517 Latency(us)
00:27:07.517 [2024-11-19T12:19:10.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:07.517 [2024-11-19T12:19:10.894Z] ===================================================================================================================
00:27:07.517 [2024-11-19T12:19:10.894Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:07.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2993756
00:27:07.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2991996
00:27:07.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2991996 ']'
00:27:07.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2991996
00:27:07.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:07.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:07.517 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2991996
00:27:07.776 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:07.776 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:07.776 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2991996'
00:27:07.776 killing process with pid 2991996
00:27:07.776 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2991996
00:27:07.776 13:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2991996
00:27:07.776
00:27:07.776 real 0m13.985s
00:27:07.776 user 0m26.862s
00:27:07.776 sys 0m4.512s
00:27:07.776 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:07.776 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:07.776 ************************************
00:27:07.776 END TEST nvmf_digest_error
00:27:07.776 ************************************
00:27:07.776 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:27:07.776 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:27:07.776 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:07.776 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:27:07.776 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:07.776 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:27:07.776 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:07.776 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:07.776 rmmod nvme_tcp
00:27:07.776 rmmod nvme_fabrics
00:27:07.776 rmmod nvme_keyring
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2991996 ']'
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2991996
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2991996 ']'
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2991996
00:27:08.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2991996) - No such process
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2991996 is not found'
00:27:08.036 Process with pid 2991996 is not found
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:08.036 13:19:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:09.944 13:19:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:09.944
00:27:09.944 real 0m36.315s
00:27:09.944 user 0m55.403s
00:27:09.944 sys 0m13.702s
00:27:09.944 13:19:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:09.944 13:19:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:27:09.945 ************************************
00:27:09.945 END TEST nvmf_digest
00:27:09.945 ************************************
00:27:09.945 13:19:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:27:09.945 13:19:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:27:09.945 13:19:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:27:09.945 13:19:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:27:09.945 13:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:09.945 13:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:10.205 13:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:10.205
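For context before the bdevperf suite starts: the digest-error verdict above came from the (( 394 > 0 )) check, where host/digest.sh reads bdevperf's iostat over RPC and extracts the transient-transport-error counter. A minimal standalone sketch of that check, assuming a bdevperf instance is still serving RPC on /var/tmp/bperf.sock (the rpc.py path and jq filter are verbatim from the trace; variable names here are illustrative):

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Pull per-bdev iostat from the bdevperf-side RPC server and keep only the
# count of completions that failed with COMMAND TRANSIENT TRANSPORT ERROR.
errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# Each injected data-digest mismatch surfaces as status (00/22), i.e. status
# code type 0x0 with status code 0x22 (transient transport error), so the
# test passes when the counter is non-zero; this run counted 394.
(( errs > 0 )) && echo "transient transport errors observed: $errs"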
************************************ 00:27:10.205 START TEST nvmf_bdevperf 00:27:10.205 ************************************ 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:10.205 * Looking for test storage... 00:27:10.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:10.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.205 --rc genhtml_branch_coverage=1 00:27:10.205 --rc genhtml_function_coverage=1 00:27:10.205 --rc genhtml_legend=1 00:27:10.205 --rc geninfo_all_blocks=1 00:27:10.205 --rc geninfo_unexecuted_blocks=1 00:27:10.205 00:27:10.205 ' 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:10.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.205 --rc genhtml_branch_coverage=1 00:27:10.205 --rc genhtml_function_coverage=1 00:27:10.205 --rc genhtml_legend=1 00:27:10.205 --rc geninfo_all_blocks=1 00:27:10.205 --rc geninfo_unexecuted_blocks=1 00:27:10.205 00:27:10.205 ' 00:27:10.205 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:10.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.205 --rc genhtml_branch_coverage=1 00:27:10.205 --rc genhtml_function_coverage=1 00:27:10.205 --rc genhtml_legend=1 00:27:10.205 --rc geninfo_all_blocks=1 00:27:10.206 --rc geninfo_unexecuted_blocks=1 00:27:10.206 00:27:10.206 ' 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:10.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.206 --rc genhtml_branch_coverage=1 00:27:10.206 --rc genhtml_function_coverage=1 00:27:10.206 --rc genhtml_legend=1 00:27:10.206 --rc geninfo_all_blocks=1 00:27:10.206 --rc geninfo_unexecuted_blocks=1 00:27:10.206 00:27:10.206 ' 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:10.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:10.206 13:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.781 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:16.782 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:16.782 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:16.782 Found net devices under 0000:86:00.0: cvl_0_0 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:16.782 Found net devices under 0000:86:00.1: cvl_0_1 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:16.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:27:16.782 00:27:16.782 --- 10.0.0.2 ping statistics --- 00:27:16.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.782 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:16.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:27:16.782 00:27:16.782 --- 10.0.0.1 ping statistics --- 00:27:16.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.782 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2997768 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2997768 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2997768 ']' 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.782 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.783 [2024-11-19 13:19:19.527551] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
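The two ping exchanges above are the final step of nvmf_tcp_init: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as 10.0.0.2 while the initiator keeps cvl_0_1 in the root namespace as 10.0.0.1. Collected from the trace into one runnable sketch (commands as traced; the iptables rule is shown without the bookkeeping comment the harness appends):

# Split target and initiator across a network namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator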
00:27:16.783 [2024-11-19 13:19:19.527598] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:16.783 [2024-11-19 13:19:19.603076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:16.783 [2024-11-19 13:19:19.643387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:16.783 [2024-11-19 13:19:19.643424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:16.783 [2024-11-19 13:19:19.643431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:16.783 [2024-11-19 13:19:19.643437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:16.783 [2024-11-19 13:19:19.643442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:16.783 [2024-11-19 13:19:19.644856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:16.783 [2024-11-19 13:19:19.644981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:27:16.783 [2024-11-19 13:19:19.644980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:16.783 [2024-11-19 13:19:19.789192] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:16.783 Malloc0
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
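The target now has a TCP transport, a Malloc0 bdev, and subsystem cnode1; the namespace and listener registrations follow immediately below. For reference, the whole provisioning sequence that rpc_cmd drives in this test, rewritten as direct rpc.py calls: an equivalent sketch against the target's default /var/tmp/spdk.sock, not the literal harness code, with flags copied from the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192    # transport, flags exactly as traced
"$rpc" bdev_malloc_create 64 512 -b Malloc0       # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The host-side connection parameters printed by gen_nvmf_target_json just below mirror these values: traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1.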
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:16.783 [2024-11-19 13:19:19.846252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:16.783 {
00:27:16.783 "params": {
00:27:16.783 "name": "Nvme$subsystem",
00:27:16.783 "trtype": "$TEST_TRANSPORT",
00:27:16.783 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:16.783 "adrfam": "ipv4",
00:27:16.783 "trsvcid": "$NVMF_PORT",
00:27:16.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:16.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:16.783 "hdgst": ${hdgst:-false},
00:27:16.783 "ddgst": ${ddgst:-false}
00:27:16.783 },
00:27:16.783 "method": "bdev_nvme_attach_controller"
00:27:16.783 }
00:27:16.783 EOF
00:27:16.783 )")
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:27:16.783 13:19:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:27:16.783 "params": {
00:27:16.783 "name": "Nvme1",
00:27:16.783 "trtype": "tcp",
00:27:16.783 "traddr": "10.0.0.2",
00:27:16.783 "adrfam": "ipv4",
00:27:16.783 "trsvcid": "4420",
00:27:16.783 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:16.783 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:27:16.783 "hdgst": false,
00:27:16.783 "ddgst": false
00:27:16.783 },
00:27:16.783 "method": "bdev_nvme_attach_controller"
00:27:16.783 }'
00:27:16.783 [2024-11-19 13:19:19.896757] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
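A useful cross-check when reading the bdevperf result tables in this log: the MiB/s column is simply IOPS multiplied by the I/O size. For the 1-second verify pass that follows (-o 4096) and the earlier randwrite digest run (io_size 131072), the reported throughput reproduces exactly:

# MiB/s = IOPS * io_size / 2^20, with values copied from the two tables.
awk 'BEGIN { printf "%.2f\n", 10983.84 * 4096 / 1048576 }'     # -> 42.91  (verify pass below)
awk 'BEGIN { printf "%.2f\n", 6080.30 * 131072 / 1048576 }'    # -> 760.04 (randwrite pass above)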
00:27:16.783 [2024-11-19 13:19:19.896798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997798 ]
00:27:16.783 [2024-11-19 13:19:19.975513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:16.783 [2024-11-19 13:19:20.020466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:17.041 Running I/O for 1 seconds...
00:27:17.977 10942.00 IOPS, 42.74 MiB/s
00:27:17.977
00:27:17.977 Latency(us)
00:27:17.977 [2024-11-19T12:19:21.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:17.977 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:17.977 Verification LBA range: start 0x0 length 0x4000
00:27:17.977 Nvme1n1 : 1.01 10983.84 42.91 0.00 0.00 11608.34 2436.23 13221.18
00:27:17.977 [2024-11-19T12:19:21.354Z] ===================================================================================================================
00:27:17.977 [2024-11-19T12:19:21.354Z] Total : 10983.84 42.91 0.00 0.00 11608.34 2436.23 13221.18
00:27:18.236 13:19:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2998061
00:27:18.236 13:19:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:27:18.236 13:19:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:27:18.236 13:19:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:27:18.236 13:19:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:27:18.236 13:19:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:27:18.236 13:19:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:18.236 13:19:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:18.236 {
00:27:18.236 "params": {
00:27:18.236 "name": "Nvme$subsystem",
00:27:18.236 "trtype": "$TEST_TRANSPORT",
00:27:18.236 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:18.236 "adrfam": "ipv4",
00:27:18.236 "trsvcid": "$NVMF_PORT",
00:27:18.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:18.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:18.236 "hdgst": ${hdgst:-false},
00:27:18.236 "ddgst": ${ddgst:-false}
00:27:18.236 },
00:27:18.236 "method": "bdev_nvme_attach_controller"
00:27:18.236 }
00:27:18.236 EOF
00:27:18.236 )")
00:27:18.236 13:19:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:27:18.236 13:19:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:27:18.236 13:19:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:27:18.236 13:19:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:27:18.236 "params": {
00:27:18.236 "name": "Nvme1",
00:27:18.236 "trtype": "tcp",
00:27:18.236 "traddr": "10.0.0.2",
00:27:18.236 "adrfam": "ipv4",
00:27:18.236 "trsvcid": "4420",
00:27:18.236 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:18.236 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:27:18.236 "hdgst": false,
00:27:18.236 "ddgst": false
00:27:18.236 },
00:27:18.236 "method": "bdev_nvme_attach_controller"
00:27:18.236 }'
00:27:18.236 [2024-11-19 13:19:21.479507] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:27:18.236 [2024-11-19 13:19:21.479557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998061 ]
00:27:18.237 [2024-11-19 13:19:21.553386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:18.237 [2024-11-19 13:19:21.591856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:18.496 Running I/O for 15 seconds...
11010.00 IOPS, 43.01 MiB/s [2024-11-19T12:19:24.446Z]
11052.50 IOPS, 43.17 MiB/s [2024-11-19T12:19:24.446Z]
13:19:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2997768
00:27:21.069 13:19:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:27:21.330 [2024-11-19 13:19:24.449341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.330 [2024-11-19 13:19:24.449380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.330 [2024-11-19 13:19:24.449397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.330 [2024-11-19 13:19:24.449406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.330 [2024-11-19 13:19:24.449417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.330 [2024-11-19 13:19:24.449424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.330 [2024-11-19 13:19:24.449434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.330 [2024-11-19 13:19:24.449442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.330 [2024-11-19 13:19:24.449451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.330 [2024-11-19 13:19:24.449458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.330 [2024-11-19 13:19:24.449466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.330 [2024-11-19
13:19:24.449474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.330 [2024-11-19 13:19:24.449492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.330 [2024-11-19 13:19:24.449509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.330 [2024-11-19 13:19:24.449527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.330 [2024-11-19 13:19:24.449544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.330 [2024-11-19 13:19:24.449560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.330 [2024-11-19 13:19:24.449583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.330 [2024-11-19 13:19:24.449602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.330 [2024-11-19 13:19:24.449623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.330 [2024-11-19 13:19:24.449643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.330 [2024-11-19 13:19:24.449663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.449989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.449998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.450005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.330 [2024-11-19 13:19:24.450013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:104376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.330 [2024-11-19 13:19:24.450020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:104384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.331 [2024-11-19 13:19:24.450035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.331 [2024-11-19 13:19:24.450053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.331 [2024-11-19 13:19:24.450068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.331 [2024-11-19 13:19:24.450083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.331 [2024-11-19 13:19:24.450098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.331 [2024-11-19 13:19:24.450114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.331 [2024-11-19 13:19:24.450131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.331 [2024-11-19 13:19:24.450146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.331 [2024-11-19 13:19:24.450162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.331 [2024-11-19 13:19:24.450178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 
[2024-11-19 13:19:24.450186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.331 [2024-11-19 13:19:24.450192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.331 [2024-11-19 13:19:24.450207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.331 [2024-11-19 13:19:24.450222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450339] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450652] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 
nsid:1 lba:103888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.450935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.450941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.451070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.451079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.451088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103968 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.451095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.331 [2024-11-19 13:19:24.451103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.331 [2024-11-19 13:19:24.451110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:21.332 [2024-11-19 13:19:24.451247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 
13:19:24.451410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.332 [2024-11-19 13:19:24.451569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.451578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x274acf0 is same with the state(6) to be set 00:27:21.332 [2024-11-19 13:19:24.451587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:21.332 [2024-11-19 13:19:24.451593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:21.332 [2024-11-19 13:19:24.451599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104216 len:8 PRP1 0x0 PRP2 0x0 00:27:21.332 [2024-11-19 13:19:24.451606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.332 [2024-11-19 13:19:24.454500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.332 [2024-11-19 13:19:24.454556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.332 [2024-11-19 13:19:24.455152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.332 [2024-11-19 13:19:24.455199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.332 [2024-11-19 13:19:24.455224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.332 [2024-11-19 13:19:24.455634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.332 [2024-11-19 13:19:24.455812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.332 [2024-11-19 13:19:24.455821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.332 [2024-11-19 13:19:24.455829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.332 [2024-11-19 13:19:24.455837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
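The '{ "params": ..., "method": "bdev_nvme_attach_controller" }' blob printed before the I/O run is an ordinary SPDK JSON-RPC request, so the same attach can be reproduced by hand against a running SPDK application. A minimal sketch, assuming an SPDK checkout with scripts/rpc.py and the default RPC socket (neither is shown in this log):

# Hypothetical reproduction of the attach request above via SPDK's RPC CLI.
# Flag-to-param mapping: -b name, -t trtype, -a traddr, -f adrfam,
# -s trsvcid, -n subnqn, -q hostnqn (hdgst/ddgst default to false).
./scripts/rpc.py bdev_nvme_attach_controller \
    -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1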
00:27:21.332 [2024-11-19 13:19:24.467827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.332 [2024-11-19 13:19:24.468198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.332 [2024-11-19 13:19:24.468247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:21.332 [2024-11-19 13:19:24.468272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:21.332 [2024-11-19 13:19:24.468749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:21.332 [2024-11-19 13:19:24.468921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.332 [2024-11-19 13:19:24.468931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.332 [2024-11-19 13:19:24.468939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.332 [2024-11-19 13:19:24.468951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
[... the same resetting controller / connect() failed, errno = 111 / Resetting controller failed. sequence repeats at roughly 13 ms intervals for the remaining attempts (13:19:24.480857 through 13:19:24.637294), every one against tqpair=0x2521500, addr=10.0.0.2, port=4420 ...]
00:27:21.333 [2024-11-19 13:19:24.649532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.333 [2024-11-19 13:19:24.649873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.333 [2024-11-19 13:19:24.649890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.333 [2024-11-19 13:19:24.649898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.333 [2024-11-19 13:19:24.650078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.333 [2024-11-19 13:19:24.650251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.333 [2024-11-19 13:19:24.650261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.333 [2024-11-19 13:19:24.650268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.333 [2024-11-19 13:19:24.650275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.333 [2024-11-19 13:19:24.662563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.333 [2024-11-19 13:19:24.662997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.333 [2024-11-19 13:19:24.663044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.333 [2024-11-19 13:19:24.663068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.333 [2024-11-19 13:19:24.663644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.333 [2024-11-19 13:19:24.664020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.333 [2024-11-19 13:19:24.664030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.333 [2024-11-19 13:19:24.664038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.333 [2024-11-19 13:19:24.664046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.333 [2024-11-19 13:19:24.675349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.333 [2024-11-19 13:19:24.675768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.333 [2024-11-19 13:19:24.675814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.333 [2024-11-19 13:19:24.675839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.333 [2024-11-19 13:19:24.676432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.333 [2024-11-19 13:19:24.677030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.333 [2024-11-19 13:19:24.677058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.333 [2024-11-19 13:19:24.677081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.333 [2024-11-19 13:19:24.677102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.333 [2024-11-19 13:19:24.688137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.333 [2024-11-19 13:19:24.688558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.333 [2024-11-19 13:19:24.688575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.333 [2024-11-19 13:19:24.688586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.333 [2024-11-19 13:19:24.688749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.333 [2024-11-19 13:19:24.688912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.333 [2024-11-19 13:19:24.688921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.333 [2024-11-19 13:19:24.688928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.333 [2024-11-19 13:19:24.688934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.333 [2024-11-19 13:19:24.701374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.333 [2024-11-19 13:19:24.701835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.333 [2024-11-19 13:19:24.701853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.333 [2024-11-19 13:19:24.701862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.333 [2024-11-19 13:19:24.702059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.333 [2024-11-19 13:19:24.702239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.333 [2024-11-19 13:19:24.702249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.333 [2024-11-19 13:19:24.702256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.333 [2024-11-19 13:19:24.702263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.592 [2024-11-19 13:19:24.714519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.592 [2024-11-19 13:19:24.714962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-19 13:19:24.714985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.592 [2024-11-19 13:19:24.714995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.592 [2024-11-19 13:19:24.715174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.592 [2024-11-19 13:19:24.715353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.592 [2024-11-19 13:19:24.715363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.592 [2024-11-19 13:19:24.715372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.592 [2024-11-19 13:19:24.715379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.592 [2024-11-19 13:19:24.727628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.592 [2024-11-19 13:19:24.728066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-19 13:19:24.728086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.592 [2024-11-19 13:19:24.728095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.592 [2024-11-19 13:19:24.728277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.592 [2024-11-19 13:19:24.728459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.592 [2024-11-19 13:19:24.728469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.592 [2024-11-19 13:19:24.728477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.592 [2024-11-19 13:19:24.728484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.592 [2024-11-19 13:19:24.740628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.592 [2024-11-19 13:19:24.740980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-19 13:19:24.740997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.592 [2024-11-19 13:19:24.741007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.592 [2024-11-19 13:19:24.741169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.592 [2024-11-19 13:19:24.741332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.592 [2024-11-19 13:19:24.741342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.592 [2024-11-19 13:19:24.741348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.592 [2024-11-19 13:19:24.741355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.592 [2024-11-19 13:19:24.753434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.592 [2024-11-19 13:19:24.753852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-19 13:19:24.753869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.592 [2024-11-19 13:19:24.753877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.592 [2024-11-19 13:19:24.754063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.592 [2024-11-19 13:19:24.754236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.592 [2024-11-19 13:19:24.754246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.592 [2024-11-19 13:19:24.754253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.592 [2024-11-19 13:19:24.754262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.592 [2024-11-19 13:19:24.766337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.592 [2024-11-19 13:19:24.766757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-19 13:19:24.766775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.592 [2024-11-19 13:19:24.766782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.592 [2024-11-19 13:19:24.766945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.592 [2024-11-19 13:19:24.767139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.592 [2024-11-19 13:19:24.767149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.592 [2024-11-19 13:19:24.767160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.592 [2024-11-19 13:19:24.767167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.592 [2024-11-19 13:19:24.779215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.592 [2024-11-19 13:19:24.779554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-19 13:19:24.779572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.592 [2024-11-19 13:19:24.779580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.592 [2024-11-19 13:19:24.779752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.592 [2024-11-19 13:19:24.779924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.592 [2024-11-19 13:19:24.779933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.592 [2024-11-19 13:19:24.779941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.592 [2024-11-19 13:19:24.779953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.592 [2024-11-19 13:19:24.792163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.592 [2024-11-19 13:19:24.792579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-19 13:19:24.792625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.592 [2024-11-19 13:19:24.792650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.592 [2024-11-19 13:19:24.793243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.592 [2024-11-19 13:19:24.793407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.592 [2024-11-19 13:19:24.793417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.592 [2024-11-19 13:19:24.793423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.592 [2024-11-19 13:19:24.793430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.592 [2024-11-19 13:19:24.809126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.592 9772.33 IOPS, 38.17 MiB/s [2024-11-19T12:19:24.969Z] [2024-11-19 13:19:24.809658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-19 13:19:24.809703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.592 [2024-11-19 13:19:24.809728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.592 [2024-11-19 13:19:24.810298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.592 [2024-11-19 13:19:24.810553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.592 [2024-11-19 13:19:24.810567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.592 [2024-11-19 13:19:24.810578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.592 [2024-11-19 13:19:24.810588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.592 [2024-11-19 13:19:24.822061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.592 [2024-11-19 13:19:24.822416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-19 13:19:24.822434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.592 [2024-11-19 13:19:24.822442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.592 [2024-11-19 13:19:24.822609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.592 [2024-11-19 13:19:24.822776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.592 [2024-11-19 13:19:24.822785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.592 [2024-11-19 13:19:24.822791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.592 [2024-11-19 13:19:24.822798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
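The figure interleaved above, 9772.33 IOPS at 38.17 MiB/s, is the I/O generator's periodic throughput sample landing between reconnect attempts. The two numbers agree exactly if the workload issues 4 KiB requests (an assumption; the request size is not shown in this excerpt):

    9772.33 IOPS × 4096 B = 40,027,463.7 B/s; 40,027,463.7 / 1,048,576 ≈ 38.17 MiB/s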
00:27:21.592 [2024-11-19 13:19:24.834884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.592 [2024-11-19 13:19:24.835307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-19 13:19:24.835324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.592 [2024-11-19 13:19:24.835332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.592 [2024-11-19 13:19:24.835494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.592 [2024-11-19 13:19:24.835658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.592 [2024-11-19 13:19:24.835667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.592 [2024-11-19 13:19:24.835674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.592 [2024-11-19 13:19:24.835681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.592 [2024-11-19 13:19:24.847878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.592 [2024-11-19 13:19:24.848240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-19 13:19:24.848257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.592 [2024-11-19 13:19:24.848265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.592 [2024-11-19 13:19:24.848437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.592 [2024-11-19 13:19:24.848609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.592 [2024-11-19 13:19:24.848618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.592 [2024-11-19 13:19:24.848625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.592 [2024-11-19 13:19:24.848632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.592 [2024-11-19 13:19:24.860870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.592 [2024-11-19 13:19:24.861343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-19 13:19:24.861407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.592 [2024-11-19 13:19:24.861443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.592 [2024-11-19 13:19:24.862126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.592 [2024-11-19 13:19:24.862601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.592 [2024-11-19 13:19:24.862611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.592 [2024-11-19 13:19:24.862618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.592 [2024-11-19 13:19:24.862625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.592 [2024-11-19 13:19:24.873836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.592 [2024-11-19 13:19:24.874256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.592 [2024-11-19 13:19:24.874273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.592 [2024-11-19 13:19:24.874281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.592 [2024-11-19 13:19:24.874443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.592 [2024-11-19 13:19:24.874606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.592 [2024-11-19 13:19:24.874616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.592 [2024-11-19 13:19:24.874622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.593 [2024-11-19 13:19:24.874629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.593 [2024-11-19 13:19:24.886711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.593 [2024-11-19 13:19:24.887110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-19 13:19:24.887128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.593 [2024-11-19 13:19:24.887136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.593 [2024-11-19 13:19:24.887298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.593 [2024-11-19 13:19:24.887461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.593 [2024-11-19 13:19:24.887471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.593 [2024-11-19 13:19:24.887477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.593 [2024-11-19 13:19:24.887484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.593 [2024-11-19 13:19:24.899620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.593 [2024-11-19 13:19:24.900052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-19 13:19:24.900100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.593 [2024-11-19 13:19:24.900125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.593 [2024-11-19 13:19:24.900669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.593 [2024-11-19 13:19:24.900832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.593 [2024-11-19 13:19:24.900842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.593 [2024-11-19 13:19:24.900848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.593 [2024-11-19 13:19:24.900855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.593 [2024-11-19 13:19:24.912480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.593 [2024-11-19 13:19:24.912882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-19 13:19:24.912900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.593 [2024-11-19 13:19:24.912908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.593 [2024-11-19 13:19:24.913096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.593 [2024-11-19 13:19:24.913269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.593 [2024-11-19 13:19:24.913279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.593 [2024-11-19 13:19:24.913286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.593 [2024-11-19 13:19:24.913293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.593 [2024-11-19 13:19:24.925333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.593 [2024-11-19 13:19:24.925729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-19 13:19:24.925746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.593 [2024-11-19 13:19:24.925753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.593 [2024-11-19 13:19:24.925915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.593 [2024-11-19 13:19:24.926104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.593 [2024-11-19 13:19:24.926114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.593 [2024-11-19 13:19:24.926121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.593 [2024-11-19 13:19:24.926128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.593 [2024-11-19 13:19:24.938180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.593 [2024-11-19 13:19:24.938585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-19 13:19:24.938602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.593 [2024-11-19 13:19:24.938610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.593 [2024-11-19 13:19:24.938771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.593 [2024-11-19 13:19:24.938933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.593 [2024-11-19 13:19:24.938942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.593 [2024-11-19 13:19:24.938958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.593 [2024-11-19 13:19:24.938966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.593 [2024-11-19 13:19:24.951042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.593 [2024-11-19 13:19:24.951335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-19 13:19:24.951352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.593 [2024-11-19 13:19:24.951360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.593 [2024-11-19 13:19:24.951523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.593 [2024-11-19 13:19:24.951685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.593 [2024-11-19 13:19:24.951695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.593 [2024-11-19 13:19:24.951702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.593 [2024-11-19 13:19:24.951708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.593 [2024-11-19 13:19:24.964195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.593 [2024-11-19 13:19:24.964595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.593 [2024-11-19 13:19:24.964613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.593 [2024-11-19 13:19:24.964621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.593 [2024-11-19 13:19:24.964792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.593 [2024-11-19 13:19:24.964970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.593 [2024-11-19 13:19:24.964996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.593 [2024-11-19 13:19:24.965003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.593 [2024-11-19 13:19:24.965012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.853 [2024-11-19 13:19:24.977112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.853 [2024-11-19 13:19:24.977460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.853 [2024-11-19 13:19:24.977478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.853 [2024-11-19 13:19:24.977486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.853 [2024-11-19 13:19:24.977648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.853 [2024-11-19 13:19:24.977811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.853 [2024-11-19 13:19:24.977821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.853 [2024-11-19 13:19:24.977828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.853 [2024-11-19 13:19:24.977834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.853 [2024-11-19 13:19:24.990026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.853 [2024-11-19 13:19:24.990447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.853 [2024-11-19 13:19:24.990465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.853 [2024-11-19 13:19:24.990472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.853 [2024-11-19 13:19:24.990635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.853 [2024-11-19 13:19:24.990798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.853 [2024-11-19 13:19:24.990807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.853 [2024-11-19 13:19:24.990813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.853 [2024-11-19 13:19:24.990820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.853 [2024-11-19 13:19:25.002808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.853 [2024-11-19 13:19:25.003075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.853 [2024-11-19 13:19:25.003093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.853 [2024-11-19 13:19:25.003101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.853 [2024-11-19 13:19:25.003265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.853 [2024-11-19 13:19:25.003429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.853 [2024-11-19 13:19:25.003439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.853 [2024-11-19 13:19:25.003446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.853 [2024-11-19 13:19:25.003453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.853 [2024-11-19 13:19:25.015778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.853 [2024-11-19 13:19:25.016202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.853 [2024-11-19 13:19:25.016220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.853 [2024-11-19 13:19:25.016227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.853 [2024-11-19 13:19:25.016390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.853 [2024-11-19 13:19:25.016552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.853 [2024-11-19 13:19:25.016562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.853 [2024-11-19 13:19:25.016568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.853 [2024-11-19 13:19:25.016575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.853 [2024-11-19 13:19:25.028557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.853 [2024-11-19 13:19:25.028974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.853 [2024-11-19 13:19:25.029028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.853 [2024-11-19 13:19:25.029052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.853 [2024-11-19 13:19:25.029630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.853 [2024-11-19 13:19:25.030198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.853 [2024-11-19 13:19:25.030209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.853 [2024-11-19 13:19:25.030215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.853 [2024-11-19 13:19:25.030223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.853 [2024-11-19 13:19:25.041493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.853 [2024-11-19 13:19:25.041808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.853 [2024-11-19 13:19:25.041826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.853 [2024-11-19 13:19:25.041833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.853 [2024-11-19 13:19:25.042018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.853 [2024-11-19 13:19:25.042191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.853 [2024-11-19 13:19:25.042201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.853 [2024-11-19 13:19:25.042208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.853 [2024-11-19 13:19:25.042215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.853 [2024-11-19 13:19:25.054310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.853 [2024-11-19 13:19:25.054709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-11-19 13:19:25.054726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.854 [2024-11-19 13:19:25.054733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.854 [2024-11-19 13:19:25.054896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.854 [2024-11-19 13:19:25.055084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.854 [2024-11-19 13:19:25.055095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.854 [2024-11-19 13:19:25.055102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.854 [2024-11-19 13:19:25.055109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.854 [2024-11-19 13:19:25.067189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.854 [2024-11-19 13:19:25.067583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-11-19 13:19:25.067600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.854 [2024-11-19 13:19:25.067608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.854 [2024-11-19 13:19:25.067769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.854 [2024-11-19 13:19:25.067934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.854 [2024-11-19 13:19:25.067944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.854 [2024-11-19 13:19:25.067956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.854 [2024-11-19 13:19:25.067963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:21.854 [2024-11-19 13:19:25.080045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:21.854 [2024-11-19 13:19:25.080462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.854 [2024-11-19 13:19:25.080503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:21.854 [2024-11-19 13:19:25.080529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:21.854 [2024-11-19 13:19:25.081102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:21.854 [2024-11-19 13:19:25.081276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:21.854 [2024-11-19 13:19:25.081286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:21.854 [2024-11-19 13:19:25.081293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:21.854 [2024-11-19 13:19:25.081299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:21.854 [2024-11-19 13:19:25.092850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.854 [2024-11-19 13:19:25.093294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.854 [2024-11-19 13:19:25.093341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:21.854 [2024-11-19 13:19:25.093366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:21.854 [2024-11-19 13:19:25.093940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:21.854 [2024-11-19 13:19:25.094133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.854 [2024-11-19 13:19:25.094143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.854 [2024-11-19 13:19:25.094150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.854 [2024-11-19 13:19:25.094157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.854 [2024-11-19 13:19:25.105756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.854 [2024-11-19 13:19:25.106108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.854 [2024-11-19 13:19:25.106126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:21.854 [2024-11-19 13:19:25.106134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:21.854 [2024-11-19 13:19:25.106305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:21.854 [2024-11-19 13:19:25.106477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.854 [2024-11-19 13:19:25.106487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.854 [2024-11-19 13:19:25.106498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.854 [2024-11-19 13:19:25.106517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.854 [2024-11-19 13:19:25.118655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.854 [2024-11-19 13:19:25.119083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.854 [2024-11-19 13:19:25.119101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:21.854 [2024-11-19 13:19:25.119108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:21.854 [2024-11-19 13:19:25.119271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:21.854 [2024-11-19 13:19:25.119434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.854 [2024-11-19 13:19:25.119444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.854 [2024-11-19 13:19:25.119450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.854 [2024-11-19 13:19:25.119457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.854 [2024-11-19 13:19:25.131528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.854 [2024-11-19 13:19:25.131870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.854 [2024-11-19 13:19:25.131887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:21.854 [2024-11-19 13:19:25.131895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:21.854 [2024-11-19 13:19:25.132083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:21.854 [2024-11-19 13:19:25.132255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.854 [2024-11-19 13:19:25.132265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.854 [2024-11-19 13:19:25.132272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.854 [2024-11-19 13:19:25.132278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.854 [2024-11-19 13:19:25.144422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.854 [2024-11-19 13:19:25.144826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.854 [2024-11-19 13:19:25.144870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:21.854 [2024-11-19 13:19:25.144894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:21.854 [2024-11-19 13:19:25.145399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:21.854 [2024-11-19 13:19:25.145573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.854 [2024-11-19 13:19:25.145583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.854 [2024-11-19 13:19:25.145590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.854 [2024-11-19 13:19:25.145597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.854 [2024-11-19 13:19:25.157263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.854 [2024-11-19 13:19:25.157601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.854 [2024-11-19 13:19:25.157618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:21.854 [2024-11-19 13:19:25.157626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:21.854 [2024-11-19 13:19:25.157789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:21.854 [2024-11-19 13:19:25.157957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.854 [2024-11-19 13:19:25.157967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.854 [2024-11-19 13:19:25.157974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.854 [2024-11-19 13:19:25.157980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.854 [2024-11-19 13:19:25.170101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.854 [2024-11-19 13:19:25.170513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.854 [2024-11-19 13:19:25.170551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:21.854 [2024-11-19 13:19:25.170577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:21.854 [2024-11-19 13:19:25.171119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:21.854 [2024-11-19 13:19:25.171283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.854 [2024-11-19 13:19:25.171292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.854 [2024-11-19 13:19:25.171299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.854 [2024-11-19 13:19:25.171305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.854 [2024-11-19 13:19:25.182925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.855 [2024-11-19 13:19:25.183348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.855 [2024-11-19 13:19:25.183395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:21.855 [2024-11-19 13:19:25.183420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:21.855 [2024-11-19 13:19:25.184011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:21.855 [2024-11-19 13:19:25.184229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.855 [2024-11-19 13:19:25.184238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.855 [2024-11-19 13:19:25.184244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.855 [2024-11-19 13:19:25.184252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.855 [2024-11-19 13:19:25.195847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.855 [2024-11-19 13:19:25.196280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.855 [2024-11-19 13:19:25.196297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:21.855 [2024-11-19 13:19:25.196309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:21.855 [2024-11-19 13:19:25.196472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:21.855 [2024-11-19 13:19:25.196635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.855 [2024-11-19 13:19:25.196645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.855 [2024-11-19 13:19:25.196651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.855 [2024-11-19 13:19:25.196658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.855 [2024-11-19 13:19:25.208742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.855 [2024-11-19 13:19:25.209162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.855 [2024-11-19 13:19:25.209179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:21.855 [2024-11-19 13:19:25.209187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:21.855 [2024-11-19 13:19:25.209349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:21.855 [2024-11-19 13:19:25.209511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.855 [2024-11-19 13:19:25.209521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.855 [2024-11-19 13:19:25.209528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.855 [2024-11-19 13:19:25.209536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:21.855 [2024-11-19 13:19:25.221861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:21.855 [2024-11-19 13:19:25.222283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:21.855 [2024-11-19 13:19:25.222302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:21.855 [2024-11-19 13:19:25.222310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:21.855 [2024-11-19 13:19:25.222488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:21.855 [2024-11-19 13:19:25.222667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:21.855 [2024-11-19 13:19:25.222677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:21.855 [2024-11-19 13:19:25.222684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:21.855 [2024-11-19 13:19:25.222690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.115 [2024-11-19 13:19:25.234906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.115 [2024-11-19 13:19:25.235309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.115 [2024-11-19 13:19:25.235327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.115 [2024-11-19 13:19:25.235335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.115 [2024-11-19 13:19:25.235498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.115 [2024-11-19 13:19:25.235664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.115 [2024-11-19 13:19:25.235673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.115 [2024-11-19 13:19:25.235680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.115 [2024-11-19 13:19:25.235686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.115 [2024-11-19 13:19:25.247715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.115 [2024-11-19 13:19:25.248140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.115 [2024-11-19 13:19:25.248187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.115 [2024-11-19 13:19:25.248213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.115 [2024-11-19 13:19:25.248693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.115 [2024-11-19 13:19:25.248857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.115 [2024-11-19 13:19:25.248866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.115 [2024-11-19 13:19:25.248873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.115 [2024-11-19 13:19:25.248879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.115 [2024-11-19 13:19:25.260618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.115 [2024-11-19 13:19:25.261040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.115 [2024-11-19 13:19:25.261058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.115 [2024-11-19 13:19:25.261066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.115 [2024-11-19 13:19:25.261238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.115 [2024-11-19 13:19:25.261410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.115 [2024-11-19 13:19:25.261419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.115 [2024-11-19 13:19:25.261426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.115 [2024-11-19 13:19:25.261433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.115 [2024-11-19 13:19:25.273540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.115 [2024-11-19 13:19:25.273847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.115 [2024-11-19 13:19:25.273864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.115 [2024-11-19 13:19:25.273871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.115 [2024-11-19 13:19:25.274057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.115 [2024-11-19 13:19:25.274230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.115 [2024-11-19 13:19:25.274240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.115 [2024-11-19 13:19:25.274251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.115 [2024-11-19 13:19:25.274258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.115 [2024-11-19 13:19:25.286380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.115 [2024-11-19 13:19:25.286779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.115 [2024-11-19 13:19:25.286796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.115 [2024-11-19 13:19:25.286804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.115 [2024-11-19 13:19:25.286972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.115 [2024-11-19 13:19:25.287159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.115 [2024-11-19 13:19:25.287169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.115 [2024-11-19 13:19:25.287176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.115 [2024-11-19 13:19:25.287182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.115 [2024-11-19 13:19:25.299178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.115 [2024-11-19 13:19:25.299521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.115 [2024-11-19 13:19:25.299539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.115 [2024-11-19 13:19:25.299547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.115 [2024-11-19 13:19:25.299709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.115 [2024-11-19 13:19:25.299872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.115 [2024-11-19 13:19:25.299881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.115 [2024-11-19 13:19:25.299888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.115 [2024-11-19 13:19:25.299895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.115 [2024-11-19 13:19:25.311974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.115 [2024-11-19 13:19:25.312387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.115 [2024-11-19 13:19:25.312428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.115 [2024-11-19 13:19:25.312455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.115 [2024-11-19 13:19:25.313045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.115 [2024-11-19 13:19:25.313626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.115 [2024-11-19 13:19:25.313651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.115 [2024-11-19 13:19:25.313673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.115 [2024-11-19 13:19:25.313694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.115 [2024-11-19 13:19:25.324828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.115 [2024-11-19 13:19:25.325262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.115 [2024-11-19 13:19:25.325307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.115 [2024-11-19 13:19:25.325332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.115 [2024-11-19 13:19:25.325746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.115 [2024-11-19 13:19:25.325909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.115 [2024-11-19 13:19:25.325919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.115 [2024-11-19 13:19:25.325925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.115 [2024-11-19 13:19:25.325932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.115 [2024-11-19 13:19:25.337761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.115 [2024-11-19 13:19:25.338167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.115 [2024-11-19 13:19:25.338213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.115 [2024-11-19 13:19:25.338238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.115 [2024-11-19 13:19:25.338711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.115 [2024-11-19 13:19:25.338884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.116 [2024-11-19 13:19:25.338893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.116 [2024-11-19 13:19:25.338900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.116 [2024-11-19 13:19:25.338907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.116 [2024-11-19 13:19:25.350570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.116 [2024-11-19 13:19:25.350990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.116 [2024-11-19 13:19:25.351039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.116 [2024-11-19 13:19:25.351063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.116 [2024-11-19 13:19:25.351586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.116 [2024-11-19 13:19:25.351749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.116 [2024-11-19 13:19:25.351759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.116 [2024-11-19 13:19:25.351767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.116 [2024-11-19 13:19:25.351773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.116 [2024-11-19 13:19:25.363488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.116 [2024-11-19 13:19:25.363837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.116 [2024-11-19 13:19:25.363853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.116 [2024-11-19 13:19:25.363864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.116 [2024-11-19 13:19:25.364050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.116 [2024-11-19 13:19:25.364223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.116 [2024-11-19 13:19:25.364232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.116 [2024-11-19 13:19:25.364239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.116 [2024-11-19 13:19:25.364246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.116 [2024-11-19 13:19:25.376292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.116 [2024-11-19 13:19:25.376646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.116 [2024-11-19 13:19:25.376691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.116 [2024-11-19 13:19:25.376715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.116 [2024-11-19 13:19:25.377307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.116 [2024-11-19 13:19:25.377888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.116 [2024-11-19 13:19:25.377919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.116 [2024-11-19 13:19:25.377926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.116 [2024-11-19 13:19:25.377933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.116 [2024-11-19 13:19:25.389092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.116 [2024-11-19 13:19:25.389434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.116 [2024-11-19 13:19:25.389452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.116 [2024-11-19 13:19:25.389460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.116 [2024-11-19 13:19:25.389622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.116 [2024-11-19 13:19:25.389785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.116 [2024-11-19 13:19:25.389794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.116 [2024-11-19 13:19:25.389801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.116 [2024-11-19 13:19:25.389808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.116 [2024-11-19 13:19:25.401895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.116 [2024-11-19 13:19:25.402298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.116 [2024-11-19 13:19:25.402344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.116 [2024-11-19 13:19:25.402368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.116 [2024-11-19 13:19:25.402898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.116 [2024-11-19 13:19:25.403093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.116 [2024-11-19 13:19:25.403104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.116 [2024-11-19 13:19:25.403110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.116 [2024-11-19 13:19:25.403117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.116 [2024-11-19 13:19:25.414714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.116 [2024-11-19 13:19:25.415125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.116 [2024-11-19 13:19:25.415166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.116 [2024-11-19 13:19:25.415192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.116 [2024-11-19 13:19:25.415705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.116 [2024-11-19 13:19:25.415869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.116 [2024-11-19 13:19:25.415878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.116 [2024-11-19 13:19:25.415885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.116 [2024-11-19 13:19:25.415891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.116 [2024-11-19 13:19:25.427520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.116 [2024-11-19 13:19:25.427927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.116 [2024-11-19 13:19:25.427984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.116 [2024-11-19 13:19:25.428010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.116 [2024-11-19 13:19:25.428587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.116 [2024-11-19 13:19:25.429069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.116 [2024-11-19 13:19:25.429079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.116 [2024-11-19 13:19:25.429086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.116 [2024-11-19 13:19:25.429093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.116 [2024-11-19 13:19:25.440339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.116 [2024-11-19 13:19:25.440698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.116 [2024-11-19 13:19:25.440743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.116 [2024-11-19 13:19:25.440767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.116 [2024-11-19 13:19:25.441360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.116 [2024-11-19 13:19:25.441944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.116 [2024-11-19 13:19:25.441978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.116 [2024-11-19 13:19:25.441988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.116 [2024-11-19 13:19:25.441996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.116 [2024-11-19 13:19:25.453118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.116 [2024-11-19 13:19:25.453540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.116 [2024-11-19 13:19:25.453582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.116 [2024-11-19 13:19:25.453604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.116 [2024-11-19 13:19:25.454091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.116 [2024-11-19 13:19:25.454255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.116 [2024-11-19 13:19:25.454264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.116 [2024-11-19 13:19:25.454270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.116 [2024-11-19 13:19:25.454277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.116 [2024-11-19 13:19:25.465944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.116 [2024-11-19 13:19:25.466361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.116 [2024-11-19 13:19:25.466378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.116 [2024-11-19 13:19:25.466386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.116 [2024-11-19 13:19:25.466548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.116 [2024-11-19 13:19:25.466711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.116 [2024-11-19 13:19:25.466721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.116 [2024-11-19 13:19:25.466728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.116 [2024-11-19 13:19:25.466734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.116 [2024-11-19 13:19:25.479121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.116 [2024-11-19 13:19:25.479556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.116 [2024-11-19 13:19:25.479574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.116 [2024-11-19 13:19:25.479583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.116 [2024-11-19 13:19:25.479761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.116 [2024-11-19 13:19:25.479939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.116 [2024-11-19 13:19:25.479952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.116 [2024-11-19 13:19:25.479961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.116 [2024-11-19 13:19:25.479969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.376 [2024-11-19 13:19:25.492152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.376 [2024-11-19 13:19:25.492498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.376 [2024-11-19 13:19:25.492516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.376 [2024-11-19 13:19:25.492524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.377 [2024-11-19 13:19:25.492696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.377 [2024-11-19 13:19:25.492868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.377 [2024-11-19 13:19:25.492878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.377 [2024-11-19 13:19:25.492885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.377 [2024-11-19 13:19:25.492891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.377 [2024-11-19 13:19:25.505044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.377 [2024-11-19 13:19:25.505458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.377 [2024-11-19 13:19:25.505475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.377 [2024-11-19 13:19:25.505482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.377 [2024-11-19 13:19:25.505663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.377 [2024-11-19 13:19:25.505836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.377 [2024-11-19 13:19:25.505845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.377 [2024-11-19 13:19:25.505852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.377 [2024-11-19 13:19:25.505859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.377 [2024-11-19 13:19:25.517930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.377 [2024-11-19 13:19:25.518348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.377 [2024-11-19 13:19:25.518366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.377 [2024-11-19 13:19:25.518373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.377 [2024-11-19 13:19:25.518537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.377 [2024-11-19 13:19:25.518699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.377 [2024-11-19 13:19:25.518708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.377 [2024-11-19 13:19:25.518715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.377 [2024-11-19 13:19:25.518721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.377 [2024-11-19 13:19:25.530769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.377 [2024-11-19 13:19:25.531146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.377 [2024-11-19 13:19:25.531193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.377 [2024-11-19 13:19:25.531225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.377 [2024-11-19 13:19:25.531802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.377 [2024-11-19 13:19:25.532287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.377 [2024-11-19 13:19:25.532297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.377 [2024-11-19 13:19:25.532304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.377 [2024-11-19 13:19:25.532311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.377 [2024-11-19 13:19:25.543691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.377 [2024-11-19 13:19:25.544003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.377 [2024-11-19 13:19:25.544021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.377 [2024-11-19 13:19:25.544029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.377 [2024-11-19 13:19:25.544191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.377 [2024-11-19 13:19:25.544354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.377 [2024-11-19 13:19:25.544363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.377 [2024-11-19 13:19:25.544369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.377 [2024-11-19 13:19:25.544376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.377 [2024-11-19 13:19:25.556612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.377 [2024-11-19 13:19:25.556962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.377 [2024-11-19 13:19:25.556979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.377 [2024-11-19 13:19:25.556987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.377 [2024-11-19 13:19:25.557150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.377 [2024-11-19 13:19:25.557313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.377 [2024-11-19 13:19:25.557323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.377 [2024-11-19 13:19:25.557329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.377 [2024-11-19 13:19:25.557336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.377 [2024-11-19 13:19:25.569506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.377 [2024-11-19 13:19:25.569923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.377 [2024-11-19 13:19:25.569940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.377 [2024-11-19 13:19:25.569953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.377 [2024-11-19 13:19:25.570139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.377 [2024-11-19 13:19:25.570314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.377 [2024-11-19 13:19:25.570324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.377 [2024-11-19 13:19:25.570331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.377 [2024-11-19 13:19:25.570338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.377 [2024-11-19 13:19:25.582293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.377 [2024-11-19 13:19:25.582690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.377 [2024-11-19 13:19:25.582708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.377 [2024-11-19 13:19:25.582715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.377 [2024-11-19 13:19:25.582877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.377 [2024-11-19 13:19:25.583064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.377 [2024-11-19 13:19:25.583075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.377 [2024-11-19 13:19:25.583081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.377 [2024-11-19 13:19:25.583088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.377 [2024-11-19 13:19:25.595157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.377 [2024-11-19 13:19:25.595577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.377 [2024-11-19 13:19:25.595627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.377 [2024-11-19 13:19:25.595652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.377 [2024-11-19 13:19:25.596196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.377 [2024-11-19 13:19:25.596370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.377 [2024-11-19 13:19:25.596380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.377 [2024-11-19 13:19:25.596387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.377 [2024-11-19 13:19:25.596393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.377 [2024-11-19 13:19:25.608046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.377 [2024-11-19 13:19:25.608462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.377 [2024-11-19 13:19:25.608506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.377 [2024-11-19 13:19:25.608531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.377 [2024-11-19 13:19:25.609123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.377 [2024-11-19 13:19:25.609320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.377 [2024-11-19 13:19:25.609330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.377 [2024-11-19 13:19:25.609343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.377 [2024-11-19 13:19:25.609351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.377 [2024-11-19 13:19:25.621222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.377 [2024-11-19 13:19:25.621655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.378 [2024-11-19 13:19:25.621673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.378 [2024-11-19 13:19:25.621682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.378 [2024-11-19 13:19:25.621860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.378 [2024-11-19 13:19:25.622043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.378 [2024-11-19 13:19:25.622053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.378 [2024-11-19 13:19:25.622060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.378 [2024-11-19 13:19:25.622067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.378 [2024-11-19 13:19:25.634361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.378 [2024-11-19 13:19:25.634725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.378 [2024-11-19 13:19:25.634743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.378 [2024-11-19 13:19:25.634752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.378 [2024-11-19 13:19:25.634930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.378 [2024-11-19 13:19:25.635113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.378 [2024-11-19 13:19:25.635124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.378 [2024-11-19 13:19:25.635131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.378 [2024-11-19 13:19:25.635137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.378 [2024-11-19 13:19:25.647459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.378 [2024-11-19 13:19:25.647867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.378 [2024-11-19 13:19:25.647884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.378 [2024-11-19 13:19:25.647893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.378 [2024-11-19 13:19:25.648078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.378 [2024-11-19 13:19:25.648258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.378 [2024-11-19 13:19:25.648269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.378 [2024-11-19 13:19:25.648278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.378 [2024-11-19 13:19:25.648285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.378 [2024-11-19 13:19:25.660617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.378 [2024-11-19 13:19:25.661002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.378 [2024-11-19 13:19:25.661021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.378 [2024-11-19 13:19:25.661030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.378 [2024-11-19 13:19:25.661207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.378 [2024-11-19 13:19:25.661385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.378 [2024-11-19 13:19:25.661395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.378 [2024-11-19 13:19:25.661403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.378 [2024-11-19 13:19:25.661410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.378 [2024-11-19 13:19:25.673754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.378 [2024-11-19 13:19:25.674173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.378 [2024-11-19 13:19:25.674192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.378 [2024-11-19 13:19:25.674200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.378 [2024-11-19 13:19:25.674377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.378 [2024-11-19 13:19:25.674556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.378 [2024-11-19 13:19:25.674566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.378 [2024-11-19 13:19:25.674573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.378 [2024-11-19 13:19:25.674581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.378 [2024-11-19 13:19:25.686924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.378 [2024-11-19 13:19:25.687348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.378 [2024-11-19 13:19:25.687367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.378 [2024-11-19 13:19:25.687375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.378 [2024-11-19 13:19:25.687552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.378 [2024-11-19 13:19:25.687729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.378 [2024-11-19 13:19:25.687739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.378 [2024-11-19 13:19:25.687746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.378 [2024-11-19 13:19:25.687753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.378 [2024-11-19 13:19:25.699808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.378 [2024-11-19 13:19:25.700159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.378 [2024-11-19 13:19:25.700177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.378 [2024-11-19 13:19:25.700189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.378 [2024-11-19 13:19:25.700361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.378 [2024-11-19 13:19:25.700534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.378 [2024-11-19 13:19:25.700544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.378 [2024-11-19 13:19:25.700551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.378 [2024-11-19 13:19:25.700558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.378 [2024-11-19 13:19:25.712885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.378 [2024-11-19 13:19:25.713247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.378 [2024-11-19 13:19:25.713266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.378 [2024-11-19 13:19:25.713274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.378 [2024-11-19 13:19:25.713438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.378 [2024-11-19 13:19:25.713601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.378 [2024-11-19 13:19:25.713611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.378 [2024-11-19 13:19:25.713617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.378 [2024-11-19 13:19:25.713624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.378 [2024-11-19 13:19:25.726058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:22.378 [2024-11-19 13:19:25.726397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:22.378 [2024-11-19 13:19:25.726416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:22.378 [2024-11-19 13:19:25.726425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:22.378 [2024-11-19 13:19:25.726602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:22.378 [2024-11-19 13:19:25.726779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:22.378 [2024-11-19 13:19:25.726789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:22.378 [2024-11-19 13:19:25.726796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:22.378 [2024-11-19 13:19:25.726803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:22.378 [2024-11-19 13:19:25.739160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.378 [2024-11-19 13:19:25.739618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.378 [2024-11-19 13:19:25.739636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.378 [2024-11-19 13:19:25.739644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.378 [2024-11-19 13:19:25.739820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.378 [2024-11-19 13:19:25.740008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.378 [2024-11-19 13:19:25.740020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.378 [2024-11-19 13:19:25.740026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.378 [2024-11-19 13:19:25.740033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.639 [2024-11-19 13:19:25.752171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.639 [2024-11-19 13:19:25.752470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.639 [2024-11-19 13:19:25.752488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.639 [2024-11-19 13:19:25.752497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.639 [2024-11-19 13:19:25.752669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.639 [2024-11-19 13:19:25.752840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.639 [2024-11-19 13:19:25.752850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.639 [2024-11-19 13:19:25.752857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.639 [2024-11-19 13:19:25.752864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.639 [2024-11-19 13:19:25.765095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.639 [2024-11-19 13:19:25.765379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.639 [2024-11-19 13:19:25.765397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.639 [2024-11-19 13:19:25.765405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.639 [2024-11-19 13:19:25.765576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.639 [2024-11-19 13:19:25.765748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.639 [2024-11-19 13:19:25.765757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.639 [2024-11-19 13:19:25.765764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.639 [2024-11-19 13:19:25.765771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.640 [2024-11-19 13:19:25.777993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.640 [2024-11-19 13:19:25.778329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-11-19 13:19:25.778347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-11-19 13:19:25.778354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.640 [2024-11-19 13:19:25.778526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.640 [2024-11-19 13:19:25.778697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.640 [2024-11-19 13:19:25.778708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.640 [2024-11-19 13:19:25.778722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.640 [2024-11-19 13:19:25.778730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.640 [2024-11-19 13:19:25.790927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.640 [2024-11-19 13:19:25.791216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-11-19 13:19:25.791233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-11-19 13:19:25.791241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.640 [2024-11-19 13:19:25.791403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.640 [2024-11-19 13:19:25.791566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.640 [2024-11-19 13:19:25.791575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.640 [2024-11-19 13:19:25.791582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.640 [2024-11-19 13:19:25.791589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.640 [2024-11-19 13:19:25.803889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.640 [2024-11-19 13:19:25.804217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-11-19 13:19:25.804235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-11-19 13:19:25.804243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.640 [2024-11-19 13:19:25.804405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.640 [2024-11-19 13:19:25.804568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.640 [2024-11-19 13:19:25.804577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.640 [2024-11-19 13:19:25.804584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.640 [2024-11-19 13:19:25.804590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.640 7329.25 IOPS, 28.63 MiB/s [2024-11-19T12:19:26.017Z] [2024-11-19 13:19:25.816772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.640 [2024-11-19 13:19:25.817055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-11-19 13:19:25.817072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-11-19 13:19:25.817080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.640 [2024-11-19 13:19:25.817243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.640 [2024-11-19 13:19:25.817405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.640 [2024-11-19 13:19:25.817414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.640 [2024-11-19 13:19:25.817421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.640 [2024-11-19 13:19:25.817427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.640 [2024-11-19 13:19:25.829736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.640 [2024-11-19 13:19:25.830161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-11-19 13:19:25.830208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-11-19 13:19:25.830232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.640 [2024-11-19 13:19:25.830744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.640 [2024-11-19 13:19:25.830908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.640 [2024-11-19 13:19:25.830918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.640 [2024-11-19 13:19:25.830925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.640 [2024-11-19 13:19:25.830931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.640 [2024-11-19 13:19:25.842670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.640 [2024-11-19 13:19:25.843065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-11-19 13:19:25.843083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-11-19 13:19:25.843091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.640 [2024-11-19 13:19:25.843272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.640 [2024-11-19 13:19:25.843435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.640 [2024-11-19 13:19:25.843445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.640 [2024-11-19 13:19:25.843451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.640 [2024-11-19 13:19:25.843458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.640 [2024-11-19 13:19:25.855640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.640 [2024-11-19 13:19:25.856054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-11-19 13:19:25.856073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-11-19 13:19:25.856080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.640 [2024-11-19 13:19:25.856242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.640 [2024-11-19 13:19:25.856404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.640 [2024-11-19 13:19:25.856414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.640 [2024-11-19 13:19:25.856420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.640 [2024-11-19 13:19:25.856426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.640 [2024-11-19 13:19:25.868613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.640 [2024-11-19 13:19:25.868983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-11-19 13:19:25.869004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-11-19 13:19:25.869013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.640 [2024-11-19 13:19:25.869185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.640 [2024-11-19 13:19:25.869358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.640 [2024-11-19 13:19:25.869369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.640 [2024-11-19 13:19:25.869376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.640 [2024-11-19 13:19:25.869382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.640 [2024-11-19 13:19:25.881681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.640 [2024-11-19 13:19:25.882071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-11-19 13:19:25.882089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-11-19 13:19:25.882098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.640 [2024-11-19 13:19:25.882270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.640 [2024-11-19 13:19:25.882442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.640 [2024-11-19 13:19:25.882452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.640 [2024-11-19 13:19:25.882459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.640 [2024-11-19 13:19:25.882466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.640 [2024-11-19 13:19:25.894588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.640 [2024-11-19 13:19:25.895025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.640 [2024-11-19 13:19:25.895043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.640 [2024-11-19 13:19:25.895053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.640 [2024-11-19 13:19:25.895226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.640 [2024-11-19 13:19:25.895399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.640 [2024-11-19 13:19:25.895409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.641 [2024-11-19 13:19:25.895416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.641 [2024-11-19 13:19:25.895422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.641 [2024-11-19 13:19:25.907547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.641 [2024-11-19 13:19:25.907886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.641 [2024-11-19 13:19:25.907932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.641 [2024-11-19 13:19:25.907973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.641 [2024-11-19 13:19:25.908561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.641 [2024-11-19 13:19:25.909161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.641 [2024-11-19 13:19:25.909190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.641 [2024-11-19 13:19:25.909213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.641 [2024-11-19 13:19:25.909237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.641 [2024-11-19 13:19:25.920562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.641 [2024-11-19 13:19:25.920894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.641 [2024-11-19 13:19:25.920937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.641 [2024-11-19 13:19:25.920975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.641 [2024-11-19 13:19:25.921552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.641 [2024-11-19 13:19:25.922141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.641 [2024-11-19 13:19:25.922174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.641 [2024-11-19 13:19:25.922181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.641 [2024-11-19 13:19:25.922189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.641 [2024-11-19 13:19:25.933487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.641 [2024-11-19 13:19:25.933879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.641 [2024-11-19 13:19:25.933922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.641 [2024-11-19 13:19:25.933946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.641 [2024-11-19 13:19:25.934388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.641 [2024-11-19 13:19:25.934562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.641 [2024-11-19 13:19:25.934572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.641 [2024-11-19 13:19:25.934580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.641 [2024-11-19 13:19:25.934587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.641 [2024-11-19 13:19:25.946404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.641 [2024-11-19 13:19:25.946779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.641 [2024-11-19 13:19:25.946824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.641 [2024-11-19 13:19:25.946848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.641 [2024-11-19 13:19:25.947325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.641 [2024-11-19 13:19:25.947490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.641 [2024-11-19 13:19:25.947499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.641 [2024-11-19 13:19:25.947509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.641 [2024-11-19 13:19:25.947516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.641 [2024-11-19 13:19:25.959404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.641 [2024-11-19 13:19:25.959827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.641 [2024-11-19 13:19:25.959845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.641 [2024-11-19 13:19:25.959853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.641 [2024-11-19 13:19:25.960020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.641 [2024-11-19 13:19:25.960183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.641 [2024-11-19 13:19:25.960192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.641 [2024-11-19 13:19:25.960199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.641 [2024-11-19 13:19:25.960206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.641 [2024-11-19 13:19:25.972318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.641 [2024-11-19 13:19:25.972642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.641 [2024-11-19 13:19:25.972660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.641 [2024-11-19 13:19:25.972668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.641 [2024-11-19 13:19:25.972830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.641 [2024-11-19 13:19:25.973000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.641 [2024-11-19 13:19:25.973011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.641 [2024-11-19 13:19:25.973018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.641 [2024-11-19 13:19:25.973024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.641 [2024-11-19 13:19:25.985302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.641 [2024-11-19 13:19:25.985709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.641 [2024-11-19 13:19:25.985726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.641 [2024-11-19 13:19:25.985734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.641 [2024-11-19 13:19:25.985898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.641 [2024-11-19 13:19:25.986095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.641 [2024-11-19 13:19:25.986106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.641 [2024-11-19 13:19:25.986113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.641 [2024-11-19 13:19:25.986121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.641 [2024-11-19 13:19:25.998435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.641 [2024-11-19 13:19:25.998877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.641 [2024-11-19 13:19:25.998895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.641 [2024-11-19 13:19:25.998903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.641 [2024-11-19 13:19:25.999086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.641 [2024-11-19 13:19:25.999273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.641 [2024-11-19 13:19:25.999282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.641 [2024-11-19 13:19:25.999289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.641 [2024-11-19 13:19:25.999296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.641 [2024-11-19 13:19:26.011410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.641 [2024-11-19 13:19:26.011754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.641 [2024-11-19 13:19:26.011771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.641 [2024-11-19 13:19:26.011779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.641 [2024-11-19 13:19:26.011954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.641 [2024-11-19 13:19:26.012128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.641 [2024-11-19 13:19:26.012138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.641 [2024-11-19 13:19:26.012145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.641 [2024-11-19 13:19:26.012151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.902 [2024-11-19 13:19:26.024283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.902 [2024-11-19 13:19:26.024656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.902 [2024-11-19 13:19:26.024674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.902 [2024-11-19 13:19:26.024682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.902 [2024-11-19 13:19:26.024854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.902 [2024-11-19 13:19:26.025033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.902 [2024-11-19 13:19:26.025044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.902 [2024-11-19 13:19:26.025051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.902 [2024-11-19 13:19:26.025058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.902 [2024-11-19 13:19:26.037265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.902 [2024-11-19 13:19:26.037680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.902 [2024-11-19 13:19:26.037700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.902 [2024-11-19 13:19:26.037708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.902 [2024-11-19 13:19:26.037870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.902 [2024-11-19 13:19:26.038040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.902 [2024-11-19 13:19:26.038050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.902 [2024-11-19 13:19:26.038057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.902 [2024-11-19 13:19:26.038064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.902 [2024-11-19 13:19:26.050213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.902 [2024-11-19 13:19:26.050655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.902 [2024-11-19 13:19:26.050701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.902 [2024-11-19 13:19:26.050725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.902 [2024-11-19 13:19:26.051191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.902 [2024-11-19 13:19:26.051355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.902 [2024-11-19 13:19:26.051366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.902 [2024-11-19 13:19:26.051372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.902 [2024-11-19 13:19:26.051379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.902 [2024-11-19 13:19:26.063220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.902 [2024-11-19 13:19:26.063622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.902 [2024-11-19 13:19:26.063666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.902 [2024-11-19 13:19:26.063690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.902 [2024-11-19 13:19:26.064203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.902 [2024-11-19 13:19:26.064368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.902 [2024-11-19 13:19:26.064376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.902 [2024-11-19 13:19:26.064382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.902 [2024-11-19 13:19:26.064388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.903 [2024-11-19 13:19:26.076074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.903 [2024-11-19 13:19:26.076492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.903 [2024-11-19 13:19:26.076546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.903 [2024-11-19 13:19:26.076571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.903 [2024-11-19 13:19:26.077168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.903 [2024-11-19 13:19:26.077729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.903 [2024-11-19 13:19:26.077739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.903 [2024-11-19 13:19:26.077745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.903 [2024-11-19 13:19:26.077752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.903 [2024-11-19 13:19:26.091195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.903 [2024-11-19 13:19:26.091719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.903 [2024-11-19 13:19:26.091765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.903 [2024-11-19 13:19:26.091788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.903 [2024-11-19 13:19:26.092365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.903 [2024-11-19 13:19:26.092620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.903 [2024-11-19 13:19:26.092633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.903 [2024-11-19 13:19:26.092643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.903 [2024-11-19 13:19:26.092652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.903 [2024-11-19 13:19:26.104082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.903 [2024-11-19 13:19:26.104444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.903 [2024-11-19 13:19:26.104461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.903 [2024-11-19 13:19:26.104469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.903 [2024-11-19 13:19:26.104635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.903 [2024-11-19 13:19:26.104803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.903 [2024-11-19 13:19:26.104812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.903 [2024-11-19 13:19:26.104819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.903 [2024-11-19 13:19:26.104825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.903 [2024-11-19 13:19:26.116900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.903 [2024-11-19 13:19:26.117291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.903 [2024-11-19 13:19:26.117309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.903 [2024-11-19 13:19:26.117316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.903 [2024-11-19 13:19:26.117479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.903 [2024-11-19 13:19:26.117641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.903 [2024-11-19 13:19:26.117650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.903 [2024-11-19 13:19:26.117661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.903 [2024-11-19 13:19:26.117668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.903 [2024-11-19 13:19:26.129740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.903 [2024-11-19 13:19:26.130141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.903 [2024-11-19 13:19:26.130188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.903 [2024-11-19 13:19:26.130213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.903 [2024-11-19 13:19:26.130666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.903 [2024-11-19 13:19:26.130830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.903 [2024-11-19 13:19:26.130839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.903 [2024-11-19 13:19:26.130846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.903 [2024-11-19 13:19:26.130853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.903 [2024-11-19 13:19:26.142713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.903 [2024-11-19 13:19:26.143057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.903 [2024-11-19 13:19:26.143075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.903 [2024-11-19 13:19:26.143083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.903 [2024-11-19 13:19:26.143245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.903 [2024-11-19 13:19:26.143407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.903 [2024-11-19 13:19:26.143417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.903 [2024-11-19 13:19:26.143423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.903 [2024-11-19 13:19:26.143430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.903 [2024-11-19 13:19:26.155510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.903 [2024-11-19 13:19:26.155961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.903 [2024-11-19 13:19:26.156006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.903 [2024-11-19 13:19:26.156030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.903 [2024-11-19 13:19:26.156479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.903 [2024-11-19 13:19:26.156641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.903 [2024-11-19 13:19:26.156649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.903 [2024-11-19 13:19:26.156656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.903 [2024-11-19 13:19:26.156662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.903 [2024-11-19 13:19:26.168362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.903 [2024-11-19 13:19:26.168786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.903 [2024-11-19 13:19:26.168830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.903 [2024-11-19 13:19:26.168855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.903 [2024-11-19 13:19:26.169415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.903 [2024-11-19 13:19:26.169579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.903 [2024-11-19 13:19:26.169587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.903 [2024-11-19 13:19:26.169593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.903 [2024-11-19 13:19:26.169599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:22.903 [2024-11-19 13:19:26.181167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.903 [2024-11-19 13:19:26.181584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.903 [2024-11-19 13:19:26.181601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.903 [2024-11-19 13:19:26.181608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.903 [2024-11-19 13:19:26.181770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.903 [2024-11-19 13:19:26.181933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.903 [2024-11-19 13:19:26.181943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.903 [2024-11-19 13:19:26.181957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.903 [2024-11-19 13:19:26.181963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:22.903 [2024-11-19 13:19:26.194035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:22.903 [2024-11-19 13:19:26.194442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.903 [2024-11-19 13:19:26.194460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:22.903 [2024-11-19 13:19:26.194468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:22.903 [2024-11-19 13:19:26.194629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:22.903 [2024-11-19 13:19:26.194792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:22.903 [2024-11-19 13:19:26.194801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:22.903 [2024-11-19 13:19:26.194808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:22.904 [2024-11-19 13:19:26.194814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
[... the same reconnect cycle (connect() errno = 111, controller reinitialization failed, "Resetting controller failed.") repeats for tqpair=0x2521500, addr=10.0.0.2, port=4420 roughly every 13 ms, from 13:19:26.206937 through 13:19:26.793519 ...]
00:27:23.690 [2024-11-19 13:19:26.805243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.690 [2024-11-19 13:19:26.805683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-11-19 13:19:26.805700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.690 [2024-11-19 13:19:26.805708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.690 [2024-11-19 13:19:26.805870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.690 [2024-11-19 13:19:26.806058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.690 [2024-11-19 13:19:26.806068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.690 [2024-11-19 13:19:26.806075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.690 [2024-11-19 13:19:26.806082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.690 5863.40 IOPS, 22.90 MiB/s [2024-11-19T12:19:27.067Z] [2024-11-19 13:19:26.818044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.690 [2024-11-19 13:19:26.818472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-11-19 13:19:26.818519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.690 [2024-11-19 13:19:26.818543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.690 [2024-11-19 13:19:26.819134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.690 [2024-11-19 13:19:26.819502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.690 [2024-11-19 13:19:26.819512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.690 [2024-11-19 13:19:26.819519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.690 [2024-11-19 13:19:26.819525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.690 [2024-11-19 13:19:26.830916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.690 [2024-11-19 13:19:26.831339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-11-19 13:19:26.831391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.690 [2024-11-19 13:19:26.831416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.690 [2024-11-19 13:19:26.832008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.690 [2024-11-19 13:19:26.832296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.690 [2024-11-19 13:19:26.832306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.690 [2024-11-19 13:19:26.832313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.690 [2024-11-19 13:19:26.832320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.690 [2024-11-19 13:19:26.843751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.690 [2024-11-19 13:19:26.844081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-11-19 13:19:26.844099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.690 [2024-11-19 13:19:26.844106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.690 [2024-11-19 13:19:26.844268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.690 [2024-11-19 13:19:26.844431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.690 [2024-11-19 13:19:26.844441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.690 [2024-11-19 13:19:26.844447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.690 [2024-11-19 13:19:26.844454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.690 [2024-11-19 13:19:26.856633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.690 [2024-11-19 13:19:26.857047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-11-19 13:19:26.857065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.690 [2024-11-19 13:19:26.857073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.690 [2024-11-19 13:19:26.857235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.690 [2024-11-19 13:19:26.857398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.690 [2024-11-19 13:19:26.857407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.690 [2024-11-19 13:19:26.857414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.690 [2024-11-19 13:19:26.857420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.690 [2024-11-19 13:19:26.869433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.690 [2024-11-19 13:19:26.869853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-11-19 13:19:26.869894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.690 [2024-11-19 13:19:26.869920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.690 [2024-11-19 13:19:26.870451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.690 [2024-11-19 13:19:26.870624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.690 [2024-11-19 13:19:26.870635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.690 [2024-11-19 13:19:26.870641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.690 [2024-11-19 13:19:26.870647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.690 [2024-11-19 13:19:26.882277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.690 [2024-11-19 13:19:26.882705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-11-19 13:19:26.882730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.690 [2024-11-19 13:19:26.882738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.690 [2024-11-19 13:19:26.882901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.690 [2024-11-19 13:19:26.883092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.690 [2024-11-19 13:19:26.883102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.690 [2024-11-19 13:19:26.883109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.690 [2024-11-19 13:19:26.883116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.690 [2024-11-19 13:19:26.895215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.690 [2024-11-19 13:19:26.895561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-11-19 13:19:26.895578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.690 [2024-11-19 13:19:26.895586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.690 [2024-11-19 13:19:26.895750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.690 [2024-11-19 13:19:26.895912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.690 [2024-11-19 13:19:26.895922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.690 [2024-11-19 13:19:26.895929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.690 [2024-11-19 13:19:26.895935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.690 [2024-11-19 13:19:26.908015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.690 [2024-11-19 13:19:26.908351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.690 [2024-11-19 13:19:26.908368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.691 [2024-11-19 13:19:26.908376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.691 [2024-11-19 13:19:26.908538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.691 [2024-11-19 13:19:26.908700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.691 [2024-11-19 13:19:26.908709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.691 [2024-11-19 13:19:26.908715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.691 [2024-11-19 13:19:26.908722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.691 [2024-11-19 13:19:26.920798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.691 [2024-11-19 13:19:26.921215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-11-19 13:19:26.921233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.691 [2024-11-19 13:19:26.921240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.691 [2024-11-19 13:19:26.921406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.691 [2024-11-19 13:19:26.921569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.691 [2024-11-19 13:19:26.921578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.691 [2024-11-19 13:19:26.921585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.691 [2024-11-19 13:19:26.921592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.691 [2024-11-19 13:19:26.933584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.691 [2024-11-19 13:19:26.934004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-11-19 13:19:26.934022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.691 [2024-11-19 13:19:26.934029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.691 [2024-11-19 13:19:26.934192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.691 [2024-11-19 13:19:26.934355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.691 [2024-11-19 13:19:26.934364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.691 [2024-11-19 13:19:26.934370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.691 [2024-11-19 13:19:26.934377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.691 [2024-11-19 13:19:26.946502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.691 [2024-11-19 13:19:26.946852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-11-19 13:19:26.946870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.691 [2024-11-19 13:19:26.946878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.691 [2024-11-19 13:19:26.947065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.691 [2024-11-19 13:19:26.947238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.691 [2024-11-19 13:19:26.947247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.691 [2024-11-19 13:19:26.947254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.691 [2024-11-19 13:19:26.947261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.691 [2024-11-19 13:19:26.959504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.691 [2024-11-19 13:19:26.959859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-11-19 13:19:26.959876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.691 [2024-11-19 13:19:26.959884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.691 [2024-11-19 13:19:26.960061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.691 [2024-11-19 13:19:26.960242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.691 [2024-11-19 13:19:26.960252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.691 [2024-11-19 13:19:26.960262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.691 [2024-11-19 13:19:26.960269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.691 [2024-11-19 13:19:26.972340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.691 [2024-11-19 13:19:26.972678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-11-19 13:19:26.972695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.691 [2024-11-19 13:19:26.972703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.691 [2024-11-19 13:19:26.972875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.691 [2024-11-19 13:19:26.973053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.691 [2024-11-19 13:19:26.973064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.691 [2024-11-19 13:19:26.973071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.691 [2024-11-19 13:19:26.973078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.691 [2024-11-19 13:19:26.985372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.691 [2024-11-19 13:19:26.985817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-11-19 13:19:26.985863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.691 [2024-11-19 13:19:26.985887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.691 [2024-11-19 13:19:26.986475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.691 [2024-11-19 13:19:26.986940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.691 [2024-11-19 13:19:26.986960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.691 [2024-11-19 13:19:26.986967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.691 [2024-11-19 13:19:26.986976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.691 [2024-11-19 13:19:26.998401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.691 [2024-11-19 13:19:26.998826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-11-19 13:19:26.998844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.691 [2024-11-19 13:19:26.998852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.691 [2024-11-19 13:19:26.999029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.691 [2024-11-19 13:19:26.999203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.691 [2024-11-19 13:19:26.999213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.691 [2024-11-19 13:19:26.999220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.691 [2024-11-19 13:19:26.999227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.691 [2024-11-19 13:19:27.011247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.691 [2024-11-19 13:19:27.011599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-11-19 13:19:27.011617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.691 [2024-11-19 13:19:27.011625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.691 [2024-11-19 13:19:27.011797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.691 [2024-11-19 13:19:27.011974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.691 [2024-11-19 13:19:27.011984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.691 [2024-11-19 13:19:27.011991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.691 [2024-11-19 13:19:27.011998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.691 [2024-11-19 13:19:27.024293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.691 [2024-11-19 13:19:27.024675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.691 [2024-11-19 13:19:27.024693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.691 [2024-11-19 13:19:27.024700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.691 [2024-11-19 13:19:27.024871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.691 [2024-11-19 13:19:27.025047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.691 [2024-11-19 13:19:27.025058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.691 [2024-11-19 13:19:27.025066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.691 [2024-11-19 13:19:27.025073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.691 [2024-11-19 13:19:27.037373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.691 [2024-11-19 13:19:27.037721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-11-19 13:19:27.037738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.692 [2024-11-19 13:19:27.037746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.692 [2024-11-19 13:19:27.037923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.692 [2024-11-19 13:19:27.038105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.692 [2024-11-19 13:19:27.038116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.692 [2024-11-19 13:19:27.038123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.692 [2024-11-19 13:19:27.038129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.692 [2024-11-19 13:19:27.050321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.692 [2024-11-19 13:19:27.050711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.692 [2024-11-19 13:19:27.050734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.692 [2024-11-19 13:19:27.050742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.692 [2024-11-19 13:19:27.050906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.692 [2024-11-19 13:19:27.051095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.692 [2024-11-19 13:19:27.051106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.692 [2024-11-19 13:19:27.051112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.692 [2024-11-19 13:19:27.051122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.692 [2024-11-19 13:19:27.063351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.952 [2024-11-19 13:19:27.063780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.952 [2024-11-19 13:19:27.063827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.952 [2024-11-19 13:19:27.063851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.952 [2024-11-19 13:19:27.064443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.952 [2024-11-19 13:19:27.065026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.952 [2024-11-19 13:19:27.065036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.952 [2024-11-19 13:19:27.065043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.952 [2024-11-19 13:19:27.065050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.952 [2024-11-19 13:19:27.076321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.952 [2024-11-19 13:19:27.076727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.952 [2024-11-19 13:19:27.076745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.952 [2024-11-19 13:19:27.076754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.952 [2024-11-19 13:19:27.076931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.953 [2024-11-19 13:19:27.077113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.953 [2024-11-19 13:19:27.077123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.953 [2024-11-19 13:19:27.077131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.953 [2024-11-19 13:19:27.077138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.953 [2024-11-19 13:19:27.089469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.953 [2024-11-19 13:19:27.089878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.953 [2024-11-19 13:19:27.089895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.953 [2024-11-19 13:19:27.089904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.953 [2024-11-19 13:19:27.090090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.953 [2024-11-19 13:19:27.090268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.953 [2024-11-19 13:19:27.090278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.953 [2024-11-19 13:19:27.090285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.953 [2024-11-19 13:19:27.090292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.953 [2024-11-19 13:19:27.102617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.953 [2024-11-19 13:19:27.103048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.953 [2024-11-19 13:19:27.103067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.953 [2024-11-19 13:19:27.103076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.953 [2024-11-19 13:19:27.103252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.953 [2024-11-19 13:19:27.103430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.953 [2024-11-19 13:19:27.103440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.953 [2024-11-19 13:19:27.103447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.953 [2024-11-19 13:19:27.103453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.953 [2024-11-19 13:19:27.115808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.953 [2024-11-19 13:19:27.116225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.953 [2024-11-19 13:19:27.116244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.953 [2024-11-19 13:19:27.116252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.953 [2024-11-19 13:19:27.116429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.953 [2024-11-19 13:19:27.116607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.953 [2024-11-19 13:19:27.116618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.953 [2024-11-19 13:19:27.116625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.953 [2024-11-19 13:19:27.116632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.953 [2024-11-19 13:19:27.128955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.953 [2024-11-19 13:19:27.129385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.953 [2024-11-19 13:19:27.129403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.953 [2024-11-19 13:19:27.129412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.953 [2024-11-19 13:19:27.129588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.953 [2024-11-19 13:19:27.129766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.953 [2024-11-19 13:19:27.129775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.953 [2024-11-19 13:19:27.129790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.953 [2024-11-19 13:19:27.129797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.953 [2024-11-19 13:19:27.142120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.953 [2024-11-19 13:19:27.142492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.953 [2024-11-19 13:19:27.142509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.953 [2024-11-19 13:19:27.142517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.953 [2024-11-19 13:19:27.142692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.953 [2024-11-19 13:19:27.142869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.953 [2024-11-19 13:19:27.142879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.953 [2024-11-19 13:19:27.142886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.953 [2024-11-19 13:19:27.142892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.953 [2024-11-19 13:19:27.155221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.953 [2024-11-19 13:19:27.155576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.953 [2024-11-19 13:19:27.155594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.953 [2024-11-19 13:19:27.155602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.953 [2024-11-19 13:19:27.155779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.953 [2024-11-19 13:19:27.155964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.953 [2024-11-19 13:19:27.155974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.953 [2024-11-19 13:19:27.155983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.953 [2024-11-19 13:19:27.155990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.953 [2024-11-19 13:19:27.168313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.953 [2024-11-19 13:19:27.168742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.953 [2024-11-19 13:19:27.168760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.953 [2024-11-19 13:19:27.168768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.953 [2024-11-19 13:19:27.168946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.953 [2024-11-19 13:19:27.169129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.953 [2024-11-19 13:19:27.169139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.953 [2024-11-19 13:19:27.169146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.953 [2024-11-19 13:19:27.169153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.953 [2024-11-19 13:19:27.181484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.953 [2024-11-19 13:19:27.181916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.953 [2024-11-19 13:19:27.181934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.953 [2024-11-19 13:19:27.181942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.953 [2024-11-19 13:19:27.182124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.953 [2024-11-19 13:19:27.182303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.953 [2024-11-19 13:19:27.182313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.953 [2024-11-19 13:19:27.182320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.953 [2024-11-19 13:19:27.182327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.953 [2024-11-19 13:19:27.194653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.953 [2024-11-19 13:19:27.195068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.953 [2024-11-19 13:19:27.195087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.953 [2024-11-19 13:19:27.195096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.953 [2024-11-19 13:19:27.195273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.953 [2024-11-19 13:19:27.195450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.953 [2024-11-19 13:19:27.195461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.953 [2024-11-19 13:19:27.195467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.953 [2024-11-19 13:19:27.195474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.953 [2024-11-19 13:19:27.207748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.953 [2024-11-19 13:19:27.208183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.953 [2024-11-19 13:19:27.208202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.954 [2024-11-19 13:19:27.208210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.954 [2024-11-19 13:19:27.208386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.954 [2024-11-19 13:19:27.208564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.954 [2024-11-19 13:19:27.208573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.954 [2024-11-19 13:19:27.208581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.954 [2024-11-19 13:19:27.208588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.954 [2024-11-19 13:19:27.220910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.954 [2024-11-19 13:19:27.221276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.954 [2024-11-19 13:19:27.221297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.954 [2024-11-19 13:19:27.221305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.954 [2024-11-19 13:19:27.221483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.954 [2024-11-19 13:19:27.221659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.954 [2024-11-19 13:19:27.221669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.954 [2024-11-19 13:19:27.221676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.954 [2024-11-19 13:19:27.221684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.954 [2024-11-19 13:19:27.234098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.954 [2024-11-19 13:19:27.234532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.954 [2024-11-19 13:19:27.234550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.954 [2024-11-19 13:19:27.234558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.954 [2024-11-19 13:19:27.234734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.954 [2024-11-19 13:19:27.234913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.954 [2024-11-19 13:19:27.234922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.954 [2024-11-19 13:19:27.234929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.954 [2024-11-19 13:19:27.234936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.954 [2024-11-19 13:19:27.247408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.954 [2024-11-19 13:19:27.247766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.954 [2024-11-19 13:19:27.247784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.954 [2024-11-19 13:19:27.247792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.954 [2024-11-19 13:19:27.247975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.954 [2024-11-19 13:19:27.248154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.954 [2024-11-19 13:19:27.248164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.954 [2024-11-19 13:19:27.248171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.954 [2024-11-19 13:19:27.248177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:23.954 [2024-11-19 13:19:27.260505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:23.954 [2024-11-19 13:19:27.260930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.954 [2024-11-19 13:19:27.260953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:23.954 [2024-11-19 13:19:27.260961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:23.954 [2024-11-19 13:19:27.261141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:23.954 [2024-11-19 13:19:27.261319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:23.954 [2024-11-19 13:19:27.261329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:23.954 [2024-11-19 13:19:27.261336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:23.954 [2024-11-19 13:19:27.261343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:23.954 [2024-11-19 13:19:27.273666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:23.954 [2024-11-19 13:19:27.274097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:23.954 [2024-11-19 13:19:27.274115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420
00:27:23.954 [2024-11-19 13:19:27.274124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set
00:27:23.954 [2024-11-19 13:19:27.274301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor
00:27:23.954 [2024-11-19 13:19:27.274479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:23.954 [2024-11-19 13:19:27.274489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:23.954 [2024-11-19 13:19:27.274496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:23.954 [2024-11-19 13:19:27.274503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:23.954 [... the identical nine-line reset/connect-refused/reset-failed sequence repeats twelve more times, roughly every 13 ms, at 13:19:27.286, .299, .312, .326, .339, .352, .365, .378, .391, .404, .417 and .431; the repeated blocks are condensed here ...]
00:27:24.216 [2024-11-19 13:19:27.444310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:24.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2997768 Killed "${NVMF_APP[@]}" "$@"
00:27:24.217 [... the reconnect attempt at 13:19:27.444652 fails with the same connect() errno = 111 sequence; condensed ...]
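The Killed line above is the cause of the whole retry storm: bdevperf.sh line 35 kills the first nvmf target (PID 2997768), so nothing is accepting on 10.0.0.2:4420 and every reconnect from the host path gets errno 111 (ECONNREFUSED). A quick way to confirm that from the shell; the nc probe below is illustrative only and not part of the test scripts:

  # Illustrative probe (not from bdevperf.sh): nc -z exits non-zero when nothing
  # accepts on the port, which is exactly why connect() keeps returning errno 111.
  if ! nc -z 10.0.0.2 4420; then
      echo "no NVMe/TCP listener on 10.0.0.2:4420 -- reconnects will keep failing"
  fi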
00:27:24.217 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:27:24.217 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:24.217 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:24.217 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:24.217 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:24.217 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2999160
00:27:24.217 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2999160
00:27:24.217 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:24.217 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2999160 ']'
00:27:24.217 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:24.217 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:24.217 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:24.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:24.217 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:24.217 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:24.217 [... the reconnect attempt at 13:19:27.457464 fails with the same connect() errno = 111 / reset-failed sequence; condensed ...]
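The waitforlisten 2999160 call traced above blocks until the freshly started nvmf_tgt answers RPCs on /var/tmp/spdk.sock. A minimal sketch of that polling idea, assuming stock SPDK's scripts/rpc.py and its spdk_get_version method; pid, rpc_addr and max_retries are the values visible in the trace, the sleep interval is an assumption:

  # Sketch only; the real waitforlisten lives in common/autotest_common.sh.
  pid=2999160; rpc_addr=/var/tmp/spdk.sock; max_retries=100
  for ((i = 0; i < max_retries; i++)); do
      # Bail out if the target died before it ever started listening.
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
      # The socket is ready once any RPC succeeds.
      scripts/rpc.py -s "$rpc_addr" spdk_get_version >/dev/null 2>&1 && break
      sleep 0.5
  done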
00:27:24.217 [... three further reconnect attempts at 13:19:27.470622, .483734 and .496789 fail with the identical connect() errno = 111 / reset-failed sequence; condensed ...]
00:27:24.217 [2024-11-19 13:19:27.499833] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:27:24.217 [2024-11-19 13:19:27.499873] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:24.217 [... the reconnect attempt at 13:19:27.509934 fails with the same sequence; condensed ...]
00:27:24.217 [... four reconnect attempts at 13:19:27.523056, .536242, .549424 and .562473 fail with the identical connect() errno = 111 / reset-failed sequence; condensed ...]
00:27:24.218 [... the reconnect attempt at 13:19:27.575616 fails with the identical sequence; condensed ...]
00:27:24.218 [2024-11-19 13:19:27.578942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:24.479 [... reconnect attempts at 13:19:27.588778, .601877 and .614865 fail with the identical sequence; condensed ...]
00:27:24.480 [2024-11-19 13:19:27.621558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:24.480 [2024-11-19 13:19:27.621584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:24.480 [2024-11-19 13:19:27.621591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:24.480 [2024-11-19 13:19:27.621597] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:24.480 [2024-11-19 13:19:27.621603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
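The two capture options named in the app_setup_trace notices above, restated as runnable commands (the app name, shm id and file path are exactly as printed; where the spdk_trace binary sits depends on the build, so the bare command assumes it is on PATH):

  # Snapshot the live tracepoints of shm instance 0 of the nvmf app, per the notice:
  spdk_trace -s nvmf -i 0
  # Or keep the shared-memory trace file for offline analysis/debug:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved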
00:27:24.480 [2024-11-19 13:19:27.623016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:24.480 [2024-11-19 13:19:27.623126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:24.480 [2024-11-19 13:19:27.623127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:27:24.480 [... reconnect attempts at 13:19:27.628024 and .641136 fail with the identical connect() errno = 111 / reset-failed sequence; condensed ...]
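Reactors on cores 1, 2 and 3 (and the earlier "Total cores available: 3") follow directly from the -m 0xE mask passed to nvmf_tgt: bit n of the mask selects core n, and 0xE sets bits 1 through 3 while leaving core 0 clear. The arithmetic, checked in the shell:

  # 0xE == 0b1110: cores 1, 2 and 3 selected, core 0 left out.
  printf '0x%X\n' $(( (1 << 1) | (1 << 2) | (1 << 3) ))   # prints 0xE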
00:27:24.480 [... four reconnect attempts at 13:19:27.654176, .667385, .680510 and .693553 fail with the identical connect() errno = 111 / reset-failed sequence; condensed ...]
00:27:24.481 [... the reconnect attempt at 13:19:27.706698 fails with the identical sequence; condensed ...]
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:24.481 [... reconnect attempts at 13:19:27.719788, .732883 and .745963 fail with the identical sequence; condensed ...]
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:24.481 [2024-11-19 13:19:27.758390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:24.481 [... the reconnect attempt at 13:19:27.759114 fails with the identical connect() errno = 111 / reset-failed sequence; condensed ...]
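rpc_cmd here is the autotest harness wrapper around SPDK's JSON-RPC client, so the same call can be issued directly against the socket that waitforlisten polled earlier. A sketch; the wrapper's extra retry and namespace handling is omitted:

  # Direct equivalent of the rpc_cmd trace line above, aimed at the target's RPC socket:
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192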
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:24.481 [... reconnect attempts at 13:19:27.772243, .785408 and .798608 fail with the identical connect() errno = 111 / reset-failed sequence; condensed ...]
00:27:24.481 Malloc0
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:24.481 [... the reconnect attempt at 13:19:27.811785 fails with the identical sequence; condensed ...]
00:27:24.481 4886.17 IOPS, 19.09 MiB/s [2024-11-19T12:19:27.858Z] 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.481 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.482 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:24.482 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.482 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.482 [2024-11-19 13:19:27.824910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.482 [2024-11-19 13:19:27.825343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.482 [2024-11-19 13:19:27.825362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2521500 with addr=10.0.0.2, port=4420 00:27:24.482 [2024-11-19 13:19:27.825371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2521500 is same with the state(6) to be set 00:27:24.482 [2024-11-19 13:19:27.825547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2521500 (9): Bad file descriptor 00:27:24.482 [2024-11-19 13:19:27.825724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:24.482 [2024-11-19 13:19:27.825733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:24.482 [2024-11-19 13:19:27.825740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:24.482 [2024-11-19 13:19:27.825747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:24.482 [2024-11-19 13:19:27.826992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.482 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.482 13:19:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2998061 00:27:24.482 [2024-11-19 13:19:27.838055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:24.741 [2024-11-19 13:19:27.863771] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
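[annotation] The rpc_cmd calls interleaved with the reset noise above build the whole target in four steps. Written out as direct scripts/rpc.py invocations with the same arguments as the trace (the default /var/tmp/spdk.sock RPC socket is assumed):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420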
00:27:26.619 5669.29 IOPS, 22.15 MiB/s [2024-11-19T12:19:30.933Z] 6363.12 IOPS, 24.86 MiB/s [2024-11-19T12:19:31.867Z] 6882.67 IOPS, 26.89 MiB/s [2024-11-19T12:19:33.243Z] 7292.80 IOPS, 28.49 MiB/s [2024-11-19T12:19:33.861Z] 7634.27 IOPS, 29.82 MiB/s [2024-11-19T12:19:35.253Z] 7930.58 IOPS, 30.98 MiB/s [2024-11-19T12:19:36.191Z] 8159.92 IOPS, 31.87 MiB/s [2024-11-19T12:19:37.130Z] 8368.93 IOPS, 32.69 MiB/s [2024-11-19T12:19:37.130Z] 8553.00 IOPS, 33.41 MiB/s 00:27:33.753 Latency(us) 00:27:33.753 [2024-11-19T12:19:37.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.753 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:33.753 Verification LBA range: start 0x0 length 0x4000 00:27:33.753 Nvme1n1 : 15.01 8555.64 33.42 10828.21 0.00 6583.15 658.92 16868.40 00:27:33.753 [2024-11-19T12:19:37.130Z] =================================================================================================================== 00:27:33.753 [2024-11-19T12:19:37.130Z] Total : 8555.64 33.42 10828.21 0.00 6583.15 658.92 16868.40 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:33.753 rmmod nvme_tcp 00:27:33.753 rmmod nvme_fabrics 00:27:33.753 rmmod nvme_keyring 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2999160 ']' 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2999160 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2999160 ']' 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2999160 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.753 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2999160 
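[annotation] The bdevperf summary table above is internally consistent: with 4096-byte I/O, the IOPS column times the block size reproduces the MiB/s column. A quick check of the Nvme1n1 row:

    # 8555.64 IOPS x 4096 B / 2^20 = 33.42 MiB/s, matching the table
    awk 'BEGIN { printf "%.2f MiB/s\n", 8555.64 * 4096 / (1024 * 1024) }'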
00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999160' 00:27:34.013 killing process with pid 2999160 00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2999160 00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2999160 00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.013 13:19:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:36.548 00:27:36.548 real 0m26.076s 00:27:36.548 user 1m0.614s 00:27:36.548 sys 0m6.866s 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:36.548 ************************************ 00:27:36.548 END TEST nvmf_bdevperf 00:27:36.548 ************************************ 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.548 ************************************ 00:27:36.548 START TEST nvmf_target_disconnect 00:27:36.548 ************************************ 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:36.548 * Looking for test storage... 
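[annotation] The killprocess trace above reduces to: verify the pid is alive, read its process name (reactor_1, an SPDK reactor thread rather than a sudo wrapper), kill it, then wait for it. A simplified sketch reconstructed only from the commands visible in the log; the sudo branch and error handling here are assumptions, not the real autotest_common.sh body:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0       # nothing to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")      # 'reactor_1' in this run
        # the trace compares $name against 'sudo'; that branch is elided here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true              # reap it if it is our child
    }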
00:27:36.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:36.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.548 --rc genhtml_branch_coverage=1 00:27:36.548 --rc genhtml_function_coverage=1 00:27:36.548 --rc genhtml_legend=1 00:27:36.548 --rc geninfo_all_blocks=1 00:27:36.548 --rc geninfo_unexecuted_blocks=1 00:27:36.548 00:27:36.548 ' 00:27:36.548 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:36.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.549 --rc genhtml_branch_coverage=1 00:27:36.549 --rc genhtml_function_coverage=1 00:27:36.549 --rc genhtml_legend=1 00:27:36.549 --rc geninfo_all_blocks=1 00:27:36.549 --rc geninfo_unexecuted_blocks=1 00:27:36.549 00:27:36.549 ' 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:36.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.549 --rc genhtml_branch_coverage=1 00:27:36.549 --rc genhtml_function_coverage=1 00:27:36.549 --rc genhtml_legend=1 00:27:36.549 --rc geninfo_all_blocks=1 00:27:36.549 --rc geninfo_unexecuted_blocks=1 00:27:36.549 00:27:36.549 ' 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:36.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.549 --rc genhtml_branch_coverage=1 00:27:36.549 --rc genhtml_function_coverage=1 00:27:36.549 --rc genhtml_legend=1 00:27:36.549 --rc geninfo_all_blocks=1 00:27:36.549 --rc geninfo_unexecuted_blocks=1 00:27:36.549 00:27:36.549 ' 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
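[annotation] The scripts/common.sh walk above ('lt 1.15 2') is a field-by-field version comparison: split both versions on dots, compare numerically, and succeed on the first strictly smaller field. A minimal standalone sketch of the same idea (not the actual cmp_versions implementation):

    lt() {
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    # as in the trace: lcov 1.15 < 2, so the old-lcov coverage opts get set
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov older than 2'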
nvmf/common.sh@7 -- # uname -s 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:36.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
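[annotation] The host identity above comes straight from nvme-cli: common.sh generates a host NQN and, as the trace shows, keeps its UUID tail as the host ID. A sketch of that derivation (the parameter expansion is an assumption; only the resulting values are visible in the log):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # 80aaeb9f-0274-ea11-906e-0017a4403562
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")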
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:36.549 13:19:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:43.121 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.121 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:43.121 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:43.121 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:43.121 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:43.122 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:43.122 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:43.122 Found net devices under 0000:86:00.0: cvl_0_0 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:43.122 Found net devices under 0000:86:00.1: cvl_0_1 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
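[annotation] Device discovery above finds two Intel E810 ports (vendor 0x8086, device 0x159b) and maps them through sysfs to the net interfaces cvl_0_0 and cvl_0_1. A hypothetical one-liner to reproduce that mapping by hand:

    # list E810 functions and the net device bound to each
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        echo "$pci -> $(ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null)"
    done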
00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:43.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:27:43.122 00:27:43.122 --- 10.0.0.2 ping statistics --- 00:27:43.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.122 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:43.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
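[annotation] nvmf_tcp_init wires the two ports into a point-to-point test network: the target port moves into a fresh namespace, each side gets a 10.0.0.x/24 address, the links and loopback come up, and an iptables ACCEPT rule opens the NVMe/TCP port. Condensed replay of exactly the commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # host -> target, 0.444 ms in this run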
00:27:43.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:27:43.122 00:27:43.122 --- 10.0.0.1 ping statistics --- 00:27:43.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.122 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:43.122 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:43.123 ************************************ 00:27:43.123 START TEST nvmf_target_disconnect_tc1 00:27:43.123 ************************************ 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:43.123 13:19:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:43.123 [2024-11-19 13:19:45.742414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.123 [2024-11-19 13:19:45.742531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6cab0 with addr=10.0.0.2, port=4420 00:27:43.123 [2024-11-19 13:19:45.742575] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:43.123 [2024-11-19 13:19:45.742600] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:43.123 [2024-11-19 13:19:45.742620] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:43.123 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:43.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:43.123 Initializing NVMe Controllers 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:43.123 00:27:43.123 real 0m0.121s 00:27:43.123 user 0m0.053s 00:27:43.123 sys 0m0.066s 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:43.123 ************************************ 00:27:43.123 END TEST nvmf_target_disconnect_tc1 00:27:43.123 ************************************ 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
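[annotation] tc1 is a negative test: the reconnect example is pointed at 10.0.0.2:4420 before any target is listening, so spdk_nvme_probe() must fail, and the NOT wrapper turns that failure into a pass. The assertion boils down to this sketch, with the NOT/valid_exec_arg plumbing stripped:

    if ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
           -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo 'connect to a port with no listener unexpectedly succeeded' >&2
        exit 1
    fi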
00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:43.123 ************************************ 00:27:43.123 START TEST nvmf_target_disconnect_tc2 00:27:43.123 ************************************ 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3004176 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3004176 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3004176 ']' 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:43.123 13:19:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.123 [2024-11-19 13:19:45.887068] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:27:43.123 [2024-11-19 13:19:45.887116] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.123 [2024-11-19 13:19:45.966288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:43.123 [2024-11-19 13:19:46.009374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.123 [2024-11-19 13:19:46.009410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
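[annotation] tc2 starts a real target inside the namespace, pinned to the upper four cores: mask 0xF0 is binary 11110000, i.e. cores 4-7, which matches the four 'Reactor started on core N' notices that follow. The launch, as traced:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # waitforlisten (autotest_common.sh) then blocks until the app answers
    # RPCs on /var/tmp/spdk.sock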
00:27:43.123 [2024-11-19 13:19:46.009418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.123 [2024-11-19 13:19:46.009424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.123 [2024-11-19 13:19:46.009429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:43.123 [2024-11-19 13:19:46.011114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:43.123 [2024-11-19 13:19:46.011224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:43.123 [2024-11-19 13:19:46.011329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:43.123 [2024-11-19 13:19:46.011330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:43.123 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:43.123 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:43.123 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:43.123 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:43.123 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.123 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.123 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:43.123 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.123 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.123 Malloc0 00:27:43.123 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.123 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:43.123 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.123 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.123 [2024-11-19 13:19:46.190265] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.123 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.124 13:19:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.124 [2024-11-19 13:19:46.222504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3004375 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:43.124 13:19:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:45.039 13:19:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3004176 00:27:45.039 13:19:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:45.039 Read completed with error (sct=0, sc=8) 00:27:45.039 starting I/O failed 00:27:45.039 Read completed with error (sct=0, sc=8) 00:27:45.039 starting I/O failed 00:27:45.039 Read completed with error (sct=0, sc=8) 00:27:45.039 starting I/O failed 00:27:45.039 Read completed with error (sct=0, sc=8) 00:27:45.039 starting I/O failed 00:27:45.039 Read completed with error (sct=0, sc=8) 00:27:45.040 starting I/O failed 00:27:45.040 Read completed with error (sct=0, sc=8) 00:27:45.040 starting I/O failed 00:27:45.040 Read completed with error 
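[annotation] The disconnect itself is deliberate: with the target serving I/O, the script SIGKILLs it (pid 3004176) while the reconnect example (pid 3004375) is mid-workload, so every queued read and write comes back failed. The sequence in the trace, condensed with this run's pids as shell variables:

    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!            # 3004375 in this run
    sleep 2
    kill -9 "$nvmfpid"         # 3004176: drop the target mid-I/O
    sleep 2                    # let the host notice and fail the queues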
00:27:45.039 Read completed with error (sct=0, sc=8)
00:27:45.039 starting I/O failed
[... condensed: the remaining queued Read/Write commands complete with the same error (sct=0, sc=8), each followed by "starting I/O failed", timestamps 00:27:45.039-00:27:45.040 ...]
00:27:45.040 [2024-11-19 13:19:48.250503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... condensed: another burst of Read/Write completions with error (sct=0, sc=8), all "starting I/O failed" ...]
00:27:45.040 [2024-11-19 13:19:48.250712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... condensed: another burst of Read/Write completions with error (sct=0, sc=8) ...]
00:27:45.040 [2024-11-19 13:19:48.250905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... condensed: a final burst of Read/Write completions with error (sct=0, sc=8) ...]
00:27:45.041 [2024-11-19 13:19:48.251106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.041 [2024-11-19 13:19:48.251325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.041 [2024-11-19 13:19:48.251349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.041 qpair failed and we were unable to recover it.
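A triage note on the codes in this burst: sct=0 is the NVMe "generic" status code type, and within it sc=0x8 decodes as "command aborted due to SQ deletion", so these are in-flight commands being failed back as their submission queues are torn down, not media errors; the -6 in "CQ transport error -6" is a negative errno (ENXIO), matching the "No such device or address" text. One way to check that decoding against the headers, assuming an SPDK checkout at $SPDK_DIR (the enum name is taken from current trees and may move between versions):

  # Hedged lookup of the status-code name behind (sct=0, sc=8).
  grep -n "SPDK_NVME_SC_ABORTED_SQ_DELETION" "$SPDK_DIR/include/spdk/nvme_spec.h"
  # expect a generic-status (sct=0) enum entry equal to 0x8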
[... condensed: the identical three-line failure, posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.", repeats for dozens of further reconnect attempts from 2024-11-19 13:19:48.251 through 13:19:48.289 ...]
00:27:45.045 [2024-11-19 13:19:48.289764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-11-19 13:19:48.289797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-11-19 13:19:48.289979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-11-19 13:19:48.290005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-11-19 13:19:48.290162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-11-19 13:19:48.290187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-11-19 13:19:48.290431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.045 [2024-11-19 13:19:48.290461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.045 qpair failed and we were unable to recover it. 00:27:45.045 [2024-11-19 13:19:48.290575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.290599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.290710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.290735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.290987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.291022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.291314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.291346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.291557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.291589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.291723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.291755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 
00:27:45.046 [2024-11-19 13:19:48.291926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.291984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.292233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.292259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.292370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.292394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.292653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.292678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.292902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.292928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.293181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.293206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.293379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.293405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.293576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.293602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.293827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.293860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.294040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.294073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 
00:27:45.046 [2024-11-19 13:19:48.294310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.294342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.294583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.294615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.294795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.294827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.295090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.295125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.295349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.295381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.295557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.295589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.295853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.295886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.296201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.296235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.296421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.296453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.296710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.296744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 
00:27:45.046 [2024-11-19 13:19:48.297036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.297070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.297278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.297311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.297617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.297650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.297789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.297821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.298014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.298047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.298185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.298217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.298497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.298530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.298794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.298827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.299017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.299051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.299265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.299298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 
00:27:45.046 [2024-11-19 13:19:48.299540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.299572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.046 [2024-11-19 13:19:48.299778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.046 [2024-11-19 13:19:48.299811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.046 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.299985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.300019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.300202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.300240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.300495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.300528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.300811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.300844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.301128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.301162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.301335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.301367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.301633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.301666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.301865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.301898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 
00:27:45.047 [2024-11-19 13:19:48.302185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.302217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.302400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.302432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.302696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.302729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.302928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.302971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.303151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.303184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.303392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.303425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.303662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.303694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.303972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.304006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.304126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.304160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.304398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.304430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 
00:27:45.047 [2024-11-19 13:19:48.304612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.304644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.304892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.304925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.305174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.305206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.305493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.305525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.305789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.305822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.306004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.306038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.306246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.306278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.306548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.306581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.306841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.306873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.307128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.307162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 
00:27:45.047 [2024-11-19 13:19:48.307412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.307446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.307749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.307781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.308037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.308071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.308359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.308392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.308613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.308645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.308853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.308886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.309077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.309111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.309367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.047 [2024-11-19 13:19:48.309400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.047 qpair failed and we were unable to recover it. 00:27:45.047 [2024-11-19 13:19:48.309685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.309718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.309996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.310031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 
00:27:45.048 [2024-11-19 13:19:48.310335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.310367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.310627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.310660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.310933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.310974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.311164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.311202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.311468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.311500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.311785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.311818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.312067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.312101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.312407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.312439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.312617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.312650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.312838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.312871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 
00:27:45.048 [2024-11-19 13:19:48.313059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.313093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.313290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.313322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.313528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.313561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.313815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.313849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.314046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.314079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.314339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.314372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.314562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.314594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.314856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.314889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.315188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.315222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.315480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.315512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 
00:27:45.048 [2024-11-19 13:19:48.315754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.315785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.316099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.316133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.316406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.316439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.316713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.316745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.316962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.316996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.317132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.317165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.317350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.317383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.317630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.317663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.317899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.317931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.318247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.318280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 
00:27:45.048 [2024-11-19 13:19:48.318569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.318601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.318895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.318927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.319194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.319228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.319339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.319369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.319644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.319677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.048 [2024-11-19 13:19:48.319943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.048 [2024-11-19 13:19:48.319986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.048 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.320268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.320301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.320492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.320525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.320803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.320835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.321023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.321057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 
00:27:45.049 [2024-11-19 13:19:48.321307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.321339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.321626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.321659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.321792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.321824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.322088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.322128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.322320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.322353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.322606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.322639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.322828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.322861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.323063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.323097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.323270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.323302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.323540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.323573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 
00:27:45.049 [2024-11-19 13:19:48.323761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.323793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.324033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.324067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.324266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.324298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.324567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.324599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.324862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.324895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.325170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.325204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.325463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.325495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.325623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.325656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.325898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.325930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.326211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.326244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 
00:27:45.049 [2024-11-19 13:19:48.326516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.326549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.326665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.326698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.326969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.327003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.327194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.327227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.327481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.327514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.327704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.049 [2024-11-19 13:19:48.327736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.049 qpair failed and we were unable to recover it. 00:27:45.049 [2024-11-19 13:19:48.328005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.328040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.328267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.328299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.328538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.328571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.328825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.328858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 
00:27:45.050 [2024-11-19 13:19:48.329038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.329072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.329312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.329345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.329633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.329666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.329917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.329958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.330255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.330288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.330430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.330464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.330731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.330763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.331045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.331080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.331350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.331383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.331519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.331551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 
00:27:45.050 [2024-11-19 13:19:48.331791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.331825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.332097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.332133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.332332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.332364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.332547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.332580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.332771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.332804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.332984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.333017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.333217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.333249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.333511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.333544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.333791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.333823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.334052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.334087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 
00:27:45.050 [2024-11-19 13:19:48.334262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.334295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.334500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.334532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.334799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.334832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.335076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.335110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.335350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.335383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.335616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.335649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.335825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.335857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.336130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.336165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.336363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.336396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.336637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.336669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 
00:27:45.050 [2024-11-19 13:19:48.336841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.336874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.337117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.050 [2024-11-19 13:19:48.337151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.050 qpair failed and we were unable to recover it. 00:27:45.050 [2024-11-19 13:19:48.337331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.337363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.337561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.337594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.337814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.337847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.338090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.338124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.338320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.338353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.338542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.338576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.338844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.338876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.339082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.339116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 
00:27:45.051 [2024-11-19 13:19:48.339307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.339345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.339606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.339639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.339904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.339938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.340237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.340270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.340464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.340497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.340687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.340719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.340904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.340935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.341190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.341223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.341397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.341430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.341639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.341671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 
00:27:45.051 [2024-11-19 13:19:48.341934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.341979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.342264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.342297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.342491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.342523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.342789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.342821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.343109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.343144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.343417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.343450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.343734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.343768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.344015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.344049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.344228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.344261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.344444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.344477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 
00:27:45.051 [2024-11-19 13:19:48.344657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.344690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.344961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.344995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.345265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.345298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.345575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.345608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.345860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.345893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.346098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.346133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.346400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.346432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.346724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.346758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.347030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.347064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 00:27:45.051 [2024-11-19 13:19:48.347350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.051 [2024-11-19 13:19:48.347383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.051 qpair failed and we were unable to recover it. 
00:27:45.052 [2024-11-19 13:19:48.347653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.347685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.347925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.347968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.348263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.348296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.348533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.348565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.348783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.348816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.349088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.349123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.349420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.349452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.349573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.349606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.349875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.349908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.350122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.350156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 
00:27:45.052 [2024-11-19 13:19:48.350448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.350487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.350770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.350803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.350998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.351033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.351249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.351283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.351427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.351461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.351728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.351761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.352045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.352079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.352324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.352358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.352541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.352573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.352816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.352849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 
00:27:45.052 [2024-11-19 13:19:48.352980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.353015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.353285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.353319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.353453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.353486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.353676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.353709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.353968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.354002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.354199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.354232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.354500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.354533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.354810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.354843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.355125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.355159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.355414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.355448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 
00:27:45.052 [2024-11-19 13:19:48.355745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.355778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.355994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.356029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.356152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.356184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.356380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.356413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.356682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.356716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.356998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.357032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.357283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.357317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.052 qpair failed and we were unable to recover it. 00:27:45.052 [2024-11-19 13:19:48.357590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.052 [2024-11-19 13:19:48.357624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.357824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.357857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.358038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.358073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 
00:27:45.053 [2024-11-19 13:19:48.358265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.358298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.358551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.358584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.358865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.358898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.359206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.359240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.359417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.359449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.359643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.359676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.359867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.359901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.360184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.360218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.360412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.360445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.360696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.360730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 
00:27:45.053 [2024-11-19 13:19:48.360962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.361008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.361279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.361312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.361520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.361553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.361801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.361834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.362046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.362080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.362290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.362324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.362543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.362576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.362768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.362802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.363052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.363086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.363271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.363304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 
00:27:45.053 [2024-11-19 13:19:48.363577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.363610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.363824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.363857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.364103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.364138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.364397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.364430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.364682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.364715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.365010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.365045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.365310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.365343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.365634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.365683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.365942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.366001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.366187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.366221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 
00:27:45.053 [2024-11-19 13:19:48.366470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.366503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.366709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.366743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.366945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.367001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.367195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.367228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.367497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.053 [2024-11-19 13:19:48.367530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.053 qpair failed and we were unable to recover it. 00:27:45.053 [2024-11-19 13:19:48.367776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.054 [2024-11-19 13:19:48.367809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.054 qpair failed and we were unable to recover it. 00:27:45.054 [2024-11-19 13:19:48.368079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.054 [2024-11-19 13:19:48.368113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.054 qpair failed and we were unable to recover it. 00:27:45.054 [2024-11-19 13:19:48.368321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.054 [2024-11-19 13:19:48.368354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.054 qpair failed and we were unable to recover it. 00:27:45.054 [2024-11-19 13:19:48.368645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.054 [2024-11-19 13:19:48.368679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.054 qpair failed and we were unable to recover it. 00:27:45.054 [2024-11-19 13:19:48.368955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.054 [2024-11-19 13:19:48.368990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.054 qpair failed and we were unable to recover it. 
00:27:45.054 [2024-11-19 13:19:48.369277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.054 [2024-11-19 13:19:48.369311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.054 qpair failed and we were unable to recover it. 00:27:45.054 [2024-11-19 13:19:48.369526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.054 [2024-11-19 13:19:48.369560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.054 qpair failed and we were unable to recover it. 00:27:45.054 [2024-11-19 13:19:48.369805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.054 [2024-11-19 13:19:48.369838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.054 qpair failed and we were unable to recover it. 00:27:45.054 [2024-11-19 13:19:48.370124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.054 [2024-11-19 13:19:48.370158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.054 qpair failed and we were unable to recover it. 00:27:45.054 [2024-11-19 13:19:48.370366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.054 [2024-11-19 13:19:48.370399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.054 qpair failed and we were unable to recover it. 00:27:45.054 [2024-11-19 13:19:48.370650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.054 [2024-11-19 13:19:48.370684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.054 qpair failed and we were unable to recover it. 00:27:45.054 [2024-11-19 13:19:48.370980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.054 [2024-11-19 13:19:48.371015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.054 qpair failed and we were unable to recover it. 00:27:45.054 [2024-11-19 13:19:48.371279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.054 [2024-11-19 13:19:48.371312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.054 qpair failed and we were unable to recover it. 00:27:45.054 [2024-11-19 13:19:48.371443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.054 [2024-11-19 13:19:48.371477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.054 qpair failed and we were unable to recover it. 00:27:45.054 [2024-11-19 13:19:48.371745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.054 [2024-11-19 13:19:48.371778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.054 qpair failed and we were unable to recover it. 
00:27:45.333 [2024-11-19 13:19:48.421681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.421715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.421985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.422022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.422311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.422345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.422488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.422523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.422820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.422853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.423144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.423180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.423306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.423339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.423612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.423705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.424062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.424102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.424308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.424345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 
00:27:45.333 [2024-11-19 13:19:48.424626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.424661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.424916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.424962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.425206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.425241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.425391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.425425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.425622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.425656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.425856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.425889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.426107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.426143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.426422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.426457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.426660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.426695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.426829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.426864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 
00:27:45.333 [2024-11-19 13:19:48.427075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.427120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.427410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.427445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.427709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.427744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.428034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.428070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.428264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.428299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.428489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.428524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.428808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.428842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.429095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.429130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.429432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.429466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.429658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.429693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 
00:27:45.333 [2024-11-19 13:19:48.429876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.429910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.430191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.430226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.430506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.430540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.430826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.333 [2024-11-19 13:19:48.430861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.333 qpair failed and we were unable to recover it. 00:27:45.333 [2024-11-19 13:19:48.430989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.431025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.431302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.431338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.431562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.431595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.431867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.431901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.432116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.432153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.432403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.432437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 
00:27:45.334 [2024-11-19 13:19:48.432548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.432583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.432806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.432841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.433037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.433072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.433276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.433310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.433529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.433563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.433818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.433853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.434062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.434096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.434263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.434297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.434581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.434614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.434873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.434907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 
00:27:45.334 [2024-11-19 13:19:48.435227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.435262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.435543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.435577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.435865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.435899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.436117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.436153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.436355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.436390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.436641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.436676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.436858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.436892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.437162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.437197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.437319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.437353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.437618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.437653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 
00:27:45.334 [2024-11-19 13:19:48.437928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.437978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.438252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.438288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.438557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.438591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.438772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.438806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.439060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.439096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.439296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.439331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.439604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.439638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.439920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.439967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.440170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.440203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.440388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.440422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 
00:27:45.334 [2024-11-19 13:19:48.440720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.440753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.334 [2024-11-19 13:19:48.441009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.334 [2024-11-19 13:19:48.441044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.334 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.441324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.441359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.441573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.441608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.441866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.441901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.442184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.442219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.442501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.442536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.442816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.442850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.443123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.443158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.443429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.443462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 
00:27:45.335 [2024-11-19 13:19:48.443691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.443724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.444002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.444038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.444320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.444354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.444633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.444666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.444958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.444994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.445269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.445302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.445531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.445566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.445758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.445793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.446047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.446081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.446306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.446342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 
00:27:45.335 [2024-11-19 13:19:48.446594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.446626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.446885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.446920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.447229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.447266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.447457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.447491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.447767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.447800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.448064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.448100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.448314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.448348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.448626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.448660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.448886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.448920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.449209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.449244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 
00:27:45.335 [2024-11-19 13:19:48.449520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.449561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.449755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.449789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.449909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.449942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.450154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.450190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.450463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.450497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.450789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.450828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.451026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.451059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.451315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.451349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.451626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.451660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.335 qpair failed and we were unable to recover it. 00:27:45.335 [2024-11-19 13:19:48.451928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.335 [2024-11-19 13:19:48.451976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 
00:27:45.336 [2024-11-19 13:19:48.452206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.452240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.452440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.452475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.452760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.452794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.453020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.453055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.453328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.453362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.453564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.453599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.453878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.453912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.454133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.454168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.454444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.454479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.454733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.454767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 
00:27:45.336 [2024-11-19 13:19:48.454986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.455024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.455291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.455325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.455550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.455586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.455780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.455814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.455944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.455988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.456243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.456278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.456408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.456442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.456745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.456780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.457065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.457101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.457246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.457281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 
00:27:45.336 [2024-11-19 13:19:48.457468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.457504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.457730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.457765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.457968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.458004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.458207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.458241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.458440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.458473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.458736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.458771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.459077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.459112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.459368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.459401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.459592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.459626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 00:27:45.336 [2024-11-19 13:19:48.459818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.459852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.336 qpair failed and we were unable to recover it. 
00:27:45.336 [2024-11-19 13:19:48.460041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.336 [2024-11-19 13:19:48.460083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.460302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.460337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.460608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.460642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.460895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.460929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.461144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.461179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.461393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.461428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.461630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.461663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.461940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.461989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.462134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.462168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.462365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.462399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 
00:27:45.337 [2024-11-19 13:19:48.462665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.462700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.462983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.463020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.463250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.463285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.463555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.463588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.463869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.463905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.464182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.464216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.464415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.464449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.464630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.464664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.464956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.464991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.465264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.465298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 
00:27:45.337 [2024-11-19 13:19:48.465499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.465532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.465735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.465769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.465971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.466006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.466239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.466274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.466483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.466518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.466726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.466761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.467015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.467051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.467263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.467298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.467505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.467540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.467742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.467775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 
00:27:45.337 [2024-11-19 13:19:48.468100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.468135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.468261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.468297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.468579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.468613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.468808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.468842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.469121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.469156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.469352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.469386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.469525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.469559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.469740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.337 [2024-11-19 13:19:48.469776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.337 qpair failed and we were unable to recover it. 00:27:45.337 [2024-11-19 13:19:48.469988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.470023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.470225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.470260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 
00:27:45.338 [2024-11-19 13:19:48.470446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.470480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.470740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.470775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.471077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.471112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.471303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.471338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.471492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.471526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.471710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.471744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.472008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.472044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.472327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.472360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.472662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.472696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.472957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.472993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 
00:27:45.338 [2024-11-19 13:19:48.473120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.473153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.473408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.473441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.473744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.473779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.474026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.474061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.474183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.474218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.474470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.474503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.474758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.474793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.474985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.475021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.475155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.475190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.475392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.475425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 
00:27:45.338 [2024-11-19 13:19:48.475649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.475691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.475833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.475867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.475994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.476029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.476235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.476270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.476452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.476486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.476689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.476724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.476911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.476945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.477143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.477183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.477458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.477491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.477763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.477797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 
00:27:45.338 [2024-11-19 13:19:48.478068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.478104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.478290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.478324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.478509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.478543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.478769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.478804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.478997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.479032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.338 [2024-11-19 13:19:48.479339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.338 [2024-11-19 13:19:48.479373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.338 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.479680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.479714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.479908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.479943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.480141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.480174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.480451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.480485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 
00:27:45.339 [2024-11-19 13:19:48.480751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.480786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.480931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.480975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.481179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.481213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.481469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.481503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.481699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.481733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.482007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.482042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.482325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.482360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.482496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.482531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.482830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.482866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.483143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.483178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 
00:27:45.339 [2024-11-19 13:19:48.483387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.483421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.483697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.483732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.484016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.484051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.484239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.484274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.484542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.484575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.484753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.484787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.485070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.485105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.485369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.485405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.485641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.485675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.485965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.486000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 
00:27:45.339 [2024-11-19 13:19:48.486209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.486245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.486443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.486478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.486760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.486794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.487076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.487111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.487310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.487345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.487623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.487657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.487872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.487907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.488111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.488157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.488303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.488338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.488540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.488574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 
00:27:45.339 [2024-11-19 13:19:48.488758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.488793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.488941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.488987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.489240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.339 [2024-11-19 13:19:48.489273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.339 qpair failed and we were unable to recover it. 00:27:45.339 [2024-11-19 13:19:48.489396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.489430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.489624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.489659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.489912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.489957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.490148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.490182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.490384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.490419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.490533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.490567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.490847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.490882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 
00:27:45.340 [2024-11-19 13:19:48.491181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.491218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.491434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.491468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.491601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.491635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.491891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.491924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.492061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.492097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.492304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.492338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.492551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.492586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.492770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.492805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.493008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.493043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.493248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.493282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 
00:27:45.340 [2024-11-19 13:19:48.493543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.493579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.493754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.493787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.494012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.494048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.494328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.494362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.494502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.494536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.494787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.494820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.494996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.495032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.495255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.495291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.495548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.495582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.495792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.495826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 
00:27:45.340 [2024-11-19 13:19:48.496031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.496066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.496320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.496354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.496550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.496585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.496863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.496897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.497130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.497166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.497350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.340 [2024-11-19 13:19:48.497383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.340 qpair failed and we were unable to recover it. 00:27:45.340 [2024-11-19 13:19:48.497580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.497614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.497820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.497860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.497984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.498019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.498213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.498246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 
00:27:45.341 [2024-11-19 13:19:48.498442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.498475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.498695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.498729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.498923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.498969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.499159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.499193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.499396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.499430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.499552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.499585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.499729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.499762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.499937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.499983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.500256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.500290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.500483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.500516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 
00:27:45.341 [2024-11-19 13:19:48.500712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.500744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.501025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.501061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.501263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.501297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.501496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.501530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.501813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.501846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.502056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.502092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.502299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.502333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.502480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.502513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.502637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.502671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.502862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.502896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 
00:27:45.341 [2024-11-19 13:19:48.503100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.503136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.503254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.503289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.503561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.503595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.503784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.503817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.504018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.504056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.504201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.504234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.504442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.504475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.504738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.504774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.504972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.505008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.505267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.505303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 
00:27:45.341 [2024-11-19 13:19:48.505484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.505520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.505769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.505803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.341 [2024-11-19 13:19:48.506035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.341 [2024-11-19 13:19:48.506070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.341 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.506225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.506261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.506510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.506543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.506685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.506720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.506911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.506958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.507161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.507200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.507396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.507430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.507621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.507655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 
00:27:45.342 [2024-11-19 13:19:48.507840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.507875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.508015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.508051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.508187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.508229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.508431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.508464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.508678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.508712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.508847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.508880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.509080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.509116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.509309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.509343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.509543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.509578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.509777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.509812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 
00:27:45.342 [2024-11-19 13:19:48.510140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.510218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.510557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.510596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.510848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.510882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.511143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.511178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.511429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.511464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.511678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.511712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.511838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.511873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.512123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.512159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.512289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.512321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.512555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.512590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 
00:27:45.342 [2024-11-19 13:19:48.512723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.512756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.512973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.513008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.513268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.513301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.513485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.513519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.513710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.513752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.513944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.513994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.514209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.514245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.514376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.514412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.514536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.514569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.342 [2024-11-19 13:19:48.514818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.514852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 
00:27:45.342 [2024-11-19 13:19:48.514993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.342 [2024-11-19 13:19:48.515029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.342 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.515157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.515191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.515408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.515441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.515553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.515587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.515730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.515764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.515956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.515991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.516118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.516153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.516404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.516436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.516589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.516622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.516848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.516881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 
00:27:45.343 [2024-11-19 13:19:48.517082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.517118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.517350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.517384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.517512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.517545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.517677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.517712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.517981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.518015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.518164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.518197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.518379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.518415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.518680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.518713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.518915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.518965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.519216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.519251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 
00:27:45.343 [2024-11-19 13:19:48.519535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.519570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.519788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.519829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.519961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.519997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.520289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.520323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.520575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.520609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.520801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.520834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.521041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.521077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.521294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.521327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.521506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.521540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.521733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.521768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 
00:27:45.343 [2024-11-19 13:19:48.522045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.522079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.522280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.522315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.522593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.522627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.522850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.522884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.523128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.523162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.523372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.523406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.523663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.523697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.523996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.524031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.343 [2024-11-19 13:19:48.524294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.343 [2024-11-19 13:19:48.524328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.343 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.524522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.524556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 
00:27:45.344 [2024-11-19 13:19:48.524828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.524861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.525059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.525094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.525352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.525386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.525661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.525694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.525955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.525989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.526262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.526298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.526499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.526534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.526747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.526780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.526991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.527033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.527257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.527290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 
00:27:45.344 [2024-11-19 13:19:48.527515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.527547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.527794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.527827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.528039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.528073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.528346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.528379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.528610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.528643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.528839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.528874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.529068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.529104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.529380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.529414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.529704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.529740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.529978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.530014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 
00:27:45.344 [2024-11-19 13:19:48.530264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.530298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.530595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.530629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.530840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.530874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.531032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.531071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.531325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.531357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.531561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.531595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.531794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.531831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.532024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.532058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.532331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.532364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.532527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.532562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 
00:27:45.344 [2024-11-19 13:19:48.532780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.532813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.533069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.533103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.533325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.533360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.533554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.533588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.533784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.533819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.534072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.534108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.534302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.344 [2024-11-19 13:19:48.534336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.344 qpair failed and we were unable to recover it. 00:27:45.344 [2024-11-19 13:19:48.534496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.534530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.534741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.534775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.534972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.535008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 
00:27:45.345 [2024-11-19 13:19:48.535216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.535249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.535535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.535569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.535771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.535805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.536004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.536040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.536245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.536281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.536542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.536576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.536701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.536737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.536967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.537003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.537130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.537165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.537384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.537419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 
00:27:45.345 [2024-11-19 13:19:48.537670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.537704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.537961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.537998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.538195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.538230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.538428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.538461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.538644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.538677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.538898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.538933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.539150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.539185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.539370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.539405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.539668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.539703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.539979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.540016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 
00:27:45.345 [2024-11-19 13:19:48.540273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.540306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.540545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.540579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.540775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.540808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.541005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.541042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.541175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.541209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.541405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.541441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.541588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.541622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.541852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.541886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.542114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.542151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 00:27:45.345 [2024-11-19 13:19:48.542299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.345 [2024-11-19 13:19:48.542334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.345 qpair failed and we were unable to recover it. 
00:27:45.345 [2024-11-19 13:19:48.542519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.542553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.542809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.542843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.543039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.543075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.543353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.543388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.543597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.543632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.543903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.543938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.544154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.544195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.544403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.544439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.544621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.544655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.544943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.544993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 
00:27:45.346 [2024-11-19 13:19:48.545254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.545287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.545483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.545518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.545815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.545849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.546126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.546162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.546295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.546330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.546466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.546501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.546751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.546786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.547037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.547074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.547206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.547239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 00:27:45.346 [2024-11-19 13:19:48.547442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.346 [2024-11-19 13:19:48.547478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.346 qpair failed and we were unable to recover it. 
00:27:45.346 [2024-11-19 13:19:48.547776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.346 [2024-11-19 13:19:48.547811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.346 qpair failed and we were unable to recover it.
00:27:45.347 [... the three-line connect()/qpair-failure sequence above repeats ~40 times, timestamps 13:19:48.548 through 13:19:48.557, all for tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 ...]
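Note: errno 111 on Linux is ECONNREFUSED, i.e. the target 10.0.0.2:4420 actively refused each TCP connection attempt (no NVMe/TCP listener was accepting on that port at the time). The following is a minimal illustrative sketch of how a plain POSIX connect() surfaces this errno; it is not SPDK's actual posix_sock_create implementation, and the address and port are simply taken from the log lines above.

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        /* Target taken from the log; any host with no listener on the
         * port behaves the same way. */
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener, errno is ECONNREFUSED (111 on Linux),
             * matching the "connect() failed, errno = 111" lines. */
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        }
        close(fd);
        return 0;
    }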
00:27:45.347 [2024-11-19 13:19:48.558138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18daaf0 is same with the state(6) to be set
00:27:45.347 [2024-11-19 13:19:48.558564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.347 [2024-11-19 13:19:48.558662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:45.347 qpair failed and we were unable to recover it.
00:27:45.348 [... the three-line connect()/qpair-failure sequence above repeats ~80 times, timestamps 13:19:48.558 through 13:19:48.579, all for tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 ...]
00:27:45.349 [2024-11-19 13:19:48.579346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.349 [2024-11-19 13:19:48.579424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.349 qpair failed and we were unable to recover it.
00:27:45.352 [... the three-line connect()/qpair-failure sequence above repeats ~90 times, timestamps 13:19:48.579 through 13:19:48.601, all for tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 ...]
00:27:45.352 [2024-11-19 13:19:48.602029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.602065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.602282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.602315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.602609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.602643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.602864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.602899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.603193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.603233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.603434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.603470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.603728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.603762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.603977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.604016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.604296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.604332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.604515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.604549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 
00:27:45.352 [2024-11-19 13:19:48.604800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.604836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.605087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.605125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.605358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.605391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.605592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.605625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.605842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.605878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.606041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.606075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.606230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.606263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.606492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.606527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.606785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.606819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.607025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.607063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 
00:27:45.352 [2024-11-19 13:19:48.607314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.607349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.607478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.607511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.607766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.607802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.608047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.608081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.608360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.608393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.608656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.608690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.352 qpair failed and we were unable to recover it. 00:27:45.352 [2024-11-19 13:19:48.608971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.352 [2024-11-19 13:19:48.609006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.609241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.609276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.609578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.609611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.609894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.609928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 
00:27:45.353 [2024-11-19 13:19:48.610134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.610168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.610400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.610435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.610688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.610720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.610983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.611017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.611224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.611257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.611446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.611482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.611734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.611770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.612073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.612109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.612392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.612426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.612549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.612583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 
00:27:45.353 [2024-11-19 13:19:48.612837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.612870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.613168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.613204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.615238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.615302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.615598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.615637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.615898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.615942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.616164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.616200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.616326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.616359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.616637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.616671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.616933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.616982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.617197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.617233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 
00:27:45.353 [2024-11-19 13:19:48.617486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.617521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.617716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.617750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.618004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.618039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.618293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.618328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.618533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.618567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.618711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.618746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.619032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.619067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.619218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.619251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.619454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.619490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.619748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.619783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 
00:27:45.353 [2024-11-19 13:19:48.619989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.620025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.620164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.620198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.620486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.620520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.620747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.353 [2024-11-19 13:19:48.620781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.353 qpair failed and we were unable to recover it. 00:27:45.353 [2024-11-19 13:19:48.621082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.621119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.621352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.621386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.621671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.621704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.621824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.621858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.622060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.622096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.622350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.622384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 
00:27:45.354 [2024-11-19 13:19:48.622568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.622603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.622871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.622905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.623194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.623229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.623372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.623407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.623609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.623643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.623841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.623874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.624003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.624038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.624162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.624198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.624447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.624481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.624693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.624726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 
00:27:45.354 [2024-11-19 13:19:48.624853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.624887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.625097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.625130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.625394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.625429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.625683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.625718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.625921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.625974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.626106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.626140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.626289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.626324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.626544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.626579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.626865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.626898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.627195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.627230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 
00:27:45.354 [2024-11-19 13:19:48.627448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.627484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.627716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.627753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.628020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.628058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.628339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.628375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.628521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.628556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.628837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.354 [2024-11-19 13:19:48.628871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.354 qpair failed and we were unable to recover it. 00:27:45.354 [2024-11-19 13:19:48.629023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.629058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.629190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.629225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.629352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.629385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.629584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.629618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 
00:27:45.355 [2024-11-19 13:19:48.629901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.629934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.630196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.630231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.630471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.630504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.630756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.630790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.630992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.631029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.631299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.631333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.631488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.631522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.631776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.631811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.632094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.632133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.632428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.632465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 
00:27:45.355 [2024-11-19 13:19:48.632687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.632721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.632989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.633025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.633317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.633352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.633478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.633512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.633790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.633824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.634072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.634109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.634290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.634325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.634621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.634655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.634887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.634921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.635250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.635286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 
00:27:45.355 [2024-11-19 13:19:48.635503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.635538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.635753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.635787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.635995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.636033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.636236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.636271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.636459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.636494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.636634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.636668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.636858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.636893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.637109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.637143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.637348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.637383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 00:27:45.355 [2024-11-19 13:19:48.637586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.355 [2024-11-19 13:19:48.637620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.355 qpair failed and we were unable to recover it. 
00:27:45.355 [2024-11-19 13:19:48.637746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.355 [2024-11-19 13:19:48.637781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.355 qpair failed and we were unable to recover it.
00:27:45.355-00:27:45.640 last 3 messages repeated 209 more times, through [2024-11-19 13:19:48.699626] (210 identical connect() failures in total, all with errno = 111 for tqpair=0x7f0198000b90 at addr=10.0.0.2, port=4420)
00:27:45.640 [2024-11-19 13:19:48.699908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.640 [2024-11-19 13:19:48.699943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.640 qpair failed and we were unable to recover it. 00:27:45.640 [2024-11-19 13:19:48.700221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.640 [2024-11-19 13:19:48.700257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.640 qpair failed and we were unable to recover it. 00:27:45.640 [2024-11-19 13:19:48.700397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.640 [2024-11-19 13:19:48.700431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.640 qpair failed and we were unable to recover it. 00:27:45.640 [2024-11-19 13:19:48.700652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.640 [2024-11-19 13:19:48.700686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.640 qpair failed and we were unable to recover it. 00:27:45.640 [2024-11-19 13:19:48.700806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.640 [2024-11-19 13:19:48.700839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.640 qpair failed and we were unable to recover it. 00:27:45.640 [2024-11-19 13:19:48.701153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.640 [2024-11-19 13:19:48.701189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.640 qpair failed and we were unable to recover it. 00:27:45.640 [2024-11-19 13:19:48.701316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.640 [2024-11-19 13:19:48.701352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.640 qpair failed and we were unable to recover it. 00:27:45.640 [2024-11-19 13:19:48.701489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.640 [2024-11-19 13:19:48.701522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.640 qpair failed and we were unable to recover it. 00:27:45.640 [2024-11-19 13:19:48.701779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.640 [2024-11-19 13:19:48.701815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.640 qpair failed and we were unable to recover it. 00:27:45.640 [2024-11-19 13:19:48.702073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.640 [2024-11-19 13:19:48.702110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.640 qpair failed and we were unable to recover it. 
00:27:45.640 [2024-11-19 13:19:48.702296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.640 [2024-11-19 13:19:48.702329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.702537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.702577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.702856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.702892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.703122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.703155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.703297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.703330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.703463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.703500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.703687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.703720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.703871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.703905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.704208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.704244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.704408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.704442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 
00:27:45.641 [2024-11-19 13:19:48.706075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.706136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.706463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.706499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.706655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.706689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.708527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.708591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.708903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.708940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.709182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.709217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.709427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.709461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.709742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.709777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.709986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.710022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.710284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.710318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 
00:27:45.641 [2024-11-19 13:19:48.710481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.710515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.710717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.710751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.710964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.711001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.711214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.711246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.712637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.712690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.712923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.712968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.713195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.713226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.713358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.713390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.713610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.713642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.713893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.713924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 
00:27:45.641 [2024-11-19 13:19:48.714212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.714244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.714524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.714555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.714819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.714849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.715103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.715136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.716018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.716072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.716293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.716329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.716533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.716570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.716774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.641 [2024-11-19 13:19:48.716812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.641 qpair failed and we were unable to recover it. 00:27:45.641 [2024-11-19 13:19:48.717015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.717050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.717246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.717277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 
00:27:45.642 [2024-11-19 13:19:48.717690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.717725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.718008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.718051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.718297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.718329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.718464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.718495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.718775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.718806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.719068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.719101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.719350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.719381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.719579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.719610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.719884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.719916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.720121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.720153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 
00:27:45.642 [2024-11-19 13:19:48.720442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.720473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.720744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.720775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.721075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.721104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.721242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.721272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.721454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.721484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.721728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.721758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.722003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.722032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.722223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.722252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.722363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.722392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.722632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.722660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 
00:27:45.642 [2024-11-19 13:19:48.722843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.722872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.723002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.723032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.723304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.723333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.723452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.723481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.723650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.723679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.723861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.723890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.724317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.724350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.724574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.724605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.724853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.724883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.725003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.725033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 
00:27:45.642 [2024-11-19 13:19:48.725274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.725304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.725486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.725515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.725696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.725725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.725895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.725924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.726055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.726084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.642 qpair failed and we were unable to recover it. 00:27:45.642 [2024-11-19 13:19:48.726290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.642 [2024-11-19 13:19:48.726319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.726440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.726469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.726639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.726667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.726836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.726866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.727061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.727092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 
00:27:45.643 [2024-11-19 13:19:48.727275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.727303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.727513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.727547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.727725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.727754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.727939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.727979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.728173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.728202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.728371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.728401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.728592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.728621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.728809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.728838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.729044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.729074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.729251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.729281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 
00:27:45.643 [2024-11-19 13:19:48.729404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.729433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.729562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.729591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.729792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.729820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.729935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.729974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.730154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.730183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.730450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.730479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.730743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.730773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.730883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.730912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.731101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.731136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.731325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.731358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 
00:27:45.643 [2024-11-19 13:19:48.731489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.731522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.731740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.731775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.731889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.731922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.732078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.732113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.732391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.732424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.732696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.732730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.732864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.732897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.733107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.733142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.733366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.733401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.733586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.733619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 
00:27:45.643 [2024-11-19 13:19:48.733761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.733795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.734071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.734107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.643 [2024-11-19 13:19:48.734316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.643 [2024-11-19 13:19:48.734350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.643 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.734508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.734541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.734730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.734764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.734895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.734929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.735216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.735250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.735447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.735482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.735698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.735732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.735862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.735895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 
00:27:45.644 [2024-11-19 13:19:48.736208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.736242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.736427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.736467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.736682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.736716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.736859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.736893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.737062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.737098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.737214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.737248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.737439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.737472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.737648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.737683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.737869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.737902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.738062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.738097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 
00:27:45.644 [2024-11-19 13:19:48.738302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.738335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.738602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.738635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.738863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.738896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.739089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.739123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.739323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.739357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.739503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.739538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.739720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.739754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.740003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.740039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.740169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.740203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.740391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.740424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 
00:27:45.644 [2024-11-19 13:19:48.740694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.740728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.740977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.741013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.741126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.741160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.741302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.644 [2024-11-19 13:19:48.741337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.644 qpair failed and we were unable to recover it. 00:27:45.644 [2024-11-19 13:19:48.741535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.741570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.741821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.741855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.742050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.742085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.742206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.742239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.742431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.742513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.742725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.742763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 
00:27:45.645 [2024-11-19 13:19:48.742977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.743014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.743203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.743236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.743452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.743485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.743677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.743710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.743915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.743963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.744147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.744181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.744311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.744344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.744559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.744591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.744767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.744802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.745073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.745107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 
00:27:45.645 [2024-11-19 13:19:48.745358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.745391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.745522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.745555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.745691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.745724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.745923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.745965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.746149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.746183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.746370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.746403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.746525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.746558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.746747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.746780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.747029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.747063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.747337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.747371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 
00:27:45.645 [2024-11-19 13:19:48.747595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.747629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.747769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.747802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.748080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.748115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.748313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.748346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.748618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.748651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.748877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.748916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.749061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.749095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.749207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.749239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.749447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.749480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.749599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.749631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 
00:27:45.645 [2024-11-19 13:19:48.749842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.749876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.750058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.645 [2024-11-19 13:19:48.750093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.645 qpair failed and we were unable to recover it. 00:27:45.645 [2024-11-19 13:19:48.750280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.750313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.750494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.750528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.750663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.750697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.750921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.750964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.751227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.751260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.751461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.751497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.751745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.751778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.752035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.752070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 
00:27:45.646 [2024-11-19 13:19:48.752195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.752231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.752373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.752408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.752531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.752565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.752747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.752781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.752981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.753016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.753206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.753239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.753513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.753547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.753734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.753768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.753992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.754027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.754170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.754205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 
00:27:45.646 [2024-11-19 13:19:48.754402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.754436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.754565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.754600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.754867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.754906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.755076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.755111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.755302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.755336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.755525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.755557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.755703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.755737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.755911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.755943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.756202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.756237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.756480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.756516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 
00:27:45.646 [2024-11-19 13:19:48.756645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.756678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.756804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.756837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.757092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.757126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.757263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.757297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.757493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.757527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.757800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.757834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.757973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.758011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.758224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.758258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.758396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.758430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 00:27:45.646 [2024-11-19 13:19:48.758627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.646 [2024-11-19 13:19:48.758660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.646 qpair failed and we were unable to recover it. 
00:27:45.647 [2024-11-19 13:19:48.758777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.758811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.758990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.759027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.759242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.759275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.759472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.759505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.759629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.759665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.759862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.759896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.760035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.760070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.760267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.760300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.760478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.760512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.760689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.760727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 
00:27:45.647 [2024-11-19 13:19:48.760918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.760961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.761255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.761289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.761428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.761461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.761728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.761764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.761888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.761921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.762132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.762167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.762282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.762315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.762587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.762620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.762900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.762933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.763068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.763102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 
00:27:45.647 [2024-11-19 13:19:48.763279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.763312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.763508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.763541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.763768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.763801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.764008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.764043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.764294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.764328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.764525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.764559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.764749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.764782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.764901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.764934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.765268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.765301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.765572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.765606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 
00:27:45.647 [2024-11-19 13:19:48.765809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.765843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.765966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.766000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.766191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.766226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.766397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.766432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.766627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.766661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.766798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.766831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.767070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.767104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.767296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.647 [2024-11-19 13:19:48.767329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.647 qpair failed and we were unable to recover it. 00:27:45.647 [2024-11-19 13:19:48.767453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.767486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.767622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.767655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 
00:27:45.648 [2024-11-19 13:19:48.767773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.767806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.767924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.767968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.768106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.768139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.768245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.768279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.768396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.768428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.768621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.768655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.768874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.768909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.769170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.769205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.769413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.769446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.769625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.769658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 
00:27:45.648 [2024-11-19 13:19:48.769910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.770001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.770246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.770285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.770410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.770446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.770561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.770596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.770774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.770810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.771090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.771126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.771321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.771356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.771552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.771587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.771770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.771806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.771931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.771989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 
00:27:45.648 [2024-11-19 13:19:48.772196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.772232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.772422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.772456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.772588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.772621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.772843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.772888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.773127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.773161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.773341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.773375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.773496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.773529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.773803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.773838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.774052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.774087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 00:27:45.648 [2024-11-19 13:19:48.774272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.648 [2024-11-19 13:19:48.774311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.648 qpair failed and we were unable to recover it. 
00:27:45.648 [2024-11-19 13:19:48.774455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.648 [2024-11-19 13:19:48.774490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:45.648 qpair failed and we were unable to recover it.
[... the same three-line error repeats continuously from 13:19:48.774 through 13:19:48.819, alternating between tqpair=0x7f01a4000b90 and tqpair=0x18ccba0; every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111 and no qpair is recovered ...]
00:27:45.654 [2024-11-19 13:19:48.819673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.654 [2024-11-19 13:19:48.819706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.654 qpair failed and we were unable to recover it. 00:27:45.654 [2024-11-19 13:19:48.819911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.654 [2024-11-19 13:19:48.819944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.654 qpair failed and we were unable to recover it. 00:27:45.654 [2024-11-19 13:19:48.820136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.654 [2024-11-19 13:19:48.820169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.654 qpair failed and we were unable to recover it. 00:27:45.654 [2024-11-19 13:19:48.820339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.654 [2024-11-19 13:19:48.820372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.654 qpair failed and we were unable to recover it. 00:27:45.654 [2024-11-19 13:19:48.820549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.654 [2024-11-19 13:19:48.820582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.654 qpair failed and we were unable to recover it. 00:27:45.654 [2024-11-19 13:19:48.820780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.654 [2024-11-19 13:19:48.820812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.654 qpair failed and we were unable to recover it. 00:27:45.654 [2024-11-19 13:19:48.821015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.654 [2024-11-19 13:19:48.821050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.654 qpair failed and we were unable to recover it. 00:27:45.654 [2024-11-19 13:19:48.821234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.654 [2024-11-19 13:19:48.821268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.654 qpair failed and we were unable to recover it. 00:27:45.654 [2024-11-19 13:19:48.821396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.654 [2024-11-19 13:19:48.821429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.654 qpair failed and we were unable to recover it. 00:27:45.654 [2024-11-19 13:19:48.821557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.654 [2024-11-19 13:19:48.821590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.654 qpair failed and we were unable to recover it. 
00:27:45.654 [2024-11-19 13:19:48.821717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.654 [2024-11-19 13:19:48.821750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.654 qpair failed and we were unable to recover it. 00:27:45.654 [2024-11-19 13:19:48.822013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.654 [2024-11-19 13:19:48.822048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.654 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.822186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.822221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.822393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.822427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.822551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.822584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.822758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.822791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.822989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.823023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.823141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.823173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.823346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.823379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.823568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.823601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 
00:27:45.655 [2024-11-19 13:19:48.823840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.823873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.824119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.824153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.824404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.824436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.824683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.824717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.824890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.824923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.825056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.825091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.825279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.825311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.825419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.825452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.825568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.825602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.825736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.825768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 
00:27:45.655 [2024-11-19 13:19:48.825959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.825994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.826182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.826214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.826390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.826422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.826553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.826585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.826853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.826886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.827010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.827043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.827237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.827270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.827443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.827475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.827719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.827758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.827945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.827989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 
00:27:45.655 [2024-11-19 13:19:48.828099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.828132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.828305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.828338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.828519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.828550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.828788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.828821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.829034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.829067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.829258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.829291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.829462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.829495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.829759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.829792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.829998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.655 [2024-11-19 13:19:48.830033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.655 qpair failed and we were unable to recover it. 00:27:45.655 [2024-11-19 13:19:48.830216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.830249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 
00:27:45.656 [2024-11-19 13:19:48.830372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.830403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.830609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.830642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.830820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.830852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.830986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.831019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.831280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.831312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.831431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.831465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.831586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.831619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.831798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.831830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.832013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.832047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.832239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.832272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 
00:27:45.656 [2024-11-19 13:19:48.832380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.832412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.832667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.832700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.832961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.832994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.833125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.833157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.833421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.833454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.833632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.833665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.833915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.833958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.834155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.834188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.834374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.834406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.834659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.834692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 
00:27:45.656 [2024-11-19 13:19:48.834824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.834857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.835063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.835096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.835277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.835309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.835444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.835477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.835600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.835633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.835863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.835970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.836200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.836238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.836419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.836452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.836653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.836701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.836963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.836998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 
00:27:45.656 [2024-11-19 13:19:48.837180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.837212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.837386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.837418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.837628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.837661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.837844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.837876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.838112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.838146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.838275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.838307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.656 qpair failed and we were unable to recover it. 00:27:45.656 [2024-11-19 13:19:48.838542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.656 [2024-11-19 13:19:48.838574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.838783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.838816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.839084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.839118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.839233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.839266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 
00:27:45.657 [2024-11-19 13:19:48.839445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.839478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.839764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.839796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.839930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.839974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.840173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.840206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.840395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.840429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.840563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.840596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.840779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.840812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.841000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.841035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.841246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.841278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.841514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.841548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 
00:27:45.657 [2024-11-19 13:19:48.841724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.841756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.841969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.842003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.842187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.842219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.842335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.842367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.842502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.842534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.842713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.842751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.842990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.843024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.843230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.843262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.843475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.843508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.843705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.843737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 
00:27:45.657 [2024-11-19 13:19:48.843874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.843906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.844132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.844166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.844338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.844370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.844630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.844662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.844968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.845002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.845184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.845217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.845457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.845489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.845680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.845713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.845896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.845929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.846207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.846240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 
00:27:45.657 [2024-11-19 13:19:48.846344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.846377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.846499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.846531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.846710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.846743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.846940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.846984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.657 qpair failed and we were unable to recover it. 00:27:45.657 [2024-11-19 13:19:48.847121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.657 [2024-11-19 13:19:48.847155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.658 qpair failed and we were unable to recover it. 00:27:45.658 [2024-11-19 13:19:48.847273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.658 [2024-11-19 13:19:48.847306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.658 qpair failed and we were unable to recover it. 00:27:45.658 [2024-11-19 13:19:48.847492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.658 [2024-11-19 13:19:48.847524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.658 qpair failed and we were unable to recover it. 00:27:45.658 [2024-11-19 13:19:48.847705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.658 [2024-11-19 13:19:48.847738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.658 qpair failed and we were unable to recover it. 00:27:45.658 [2024-11-19 13:19:48.847972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.658 [2024-11-19 13:19:48.848007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.658 qpair failed and we were unable to recover it. 00:27:45.658 [2024-11-19 13:19:48.848188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.658 [2024-11-19 13:19:48.848221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.658 qpair failed and we were unable to recover it. 
00:27:45.658 [2024-11-19 13:19:48.848440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.658 [2024-11-19 13:19:48.848473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.658 qpair failed and we were unable to recover it.
00:27:45.658 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair and the "qpair failed and we were unable to recover it." message repeat for every reconnect attempt of tqpair=0x18ccba0 from 13:19:48.848 through 13:19:48.873 ...]
00:27:45.661 [2024-11-19 13:19:48.873751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.661 [2024-11-19 13:19:48.873824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.661 qpair failed and we were unable to recover it.
00:27:45.663 [... the same error pair then repeats for every reconnect attempt of tqpair=0x7f019c000b90 from 13:19:48.873 through 13:19:48.892 ...]
00:27:45.663 [2024-11-19 13:19:48.893145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.663 [2024-11-19 13:19:48.893178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.663 qpair failed and we were unable to recover it. 00:27:45.663 [2024-11-19 13:19:48.893283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.663 [2024-11-19 13:19:48.893315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.663 qpair failed and we were unable to recover it. 00:27:45.663 [2024-11-19 13:19:48.893500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.663 [2024-11-19 13:19:48.893533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.663 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.893716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.893750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.893924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.893966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.894150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.894184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.894365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.894397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.894600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.894634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.894845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.894878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.895163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.895197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 
00:27:45.664 [2024-11-19 13:19:48.895372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.895404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.895578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.895611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.895727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.895760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.895939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.895985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.896247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.896280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.896412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.896445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.896566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.896599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.896836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.896869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.897061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.897094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.897333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.897365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 
00:27:45.664 [2024-11-19 13:19:48.897560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.897593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.897728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.897774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.897968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.898002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.898130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.898163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.898345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.898378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.898512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.898545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.898732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.898765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.899002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.899037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.899255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.899288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.899427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.899459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 
00:27:45.664 [2024-11-19 13:19:48.899635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.899668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.899878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.899911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.900035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.900068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.900253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.900286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.900461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.900495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.900625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.900658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.900840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.900874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.901054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.901087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.901275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.901307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.664 qpair failed and we were unable to recover it. 00:27:45.664 [2024-11-19 13:19:48.901420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.664 [2024-11-19 13:19:48.901453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 
00:27:45.665 [2024-11-19 13:19:48.901581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.901614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.901729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.901762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.901884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.901916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.902123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.902156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.902334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.902367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.902545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.902577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.902729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.902762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.902877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.902910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.903167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.903238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.903459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.903497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 
00:27:45.665 [2024-11-19 13:19:48.903617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.903650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.903767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.903799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.903981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.904016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.904142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.904175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.904442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.904475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.904597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.904629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.904732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.904765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.904897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.904929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.905046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.905079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.905247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.905280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 
00:27:45.665 [2024-11-19 13:19:48.905475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.905507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.905695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.905738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.905916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.905958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.906159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.906191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.906315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.906346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.906469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.906502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.906621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.906653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.906890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.906922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.907054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.907086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.907276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.907309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 
00:27:45.665 [2024-11-19 13:19:48.907500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.907532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.907652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.907684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.907890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.907922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.908126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.908159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.908349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.908381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.908503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.908536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.665 [2024-11-19 13:19:48.908662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.665 [2024-11-19 13:19:48.908695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.665 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.908864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.908895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.909009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.909041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.909157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.909189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 
00:27:45.666 [2024-11-19 13:19:48.909307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.909339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.909447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.909480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.909652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.909684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.909809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.909842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.910008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.910042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.910226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.910258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.910363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.910395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.910520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.910552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.910671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.910705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.910810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.910842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 
00:27:45.666 [2024-11-19 13:19:48.911048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.911082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.911192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.911225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.911341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.911373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.911525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.911556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.911744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.911776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.911909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.911941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.912213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.912245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.912444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.912476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.912739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.912771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.912942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.912985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 
00:27:45.666 [2024-11-19 13:19:48.913107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.913138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.913319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.913357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.913469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.913501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.913693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.913725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.913848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.913880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.914016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.914049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.914222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.914254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.914427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.914460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.914625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.914658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.914774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.914806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 
00:27:45.666 [2024-11-19 13:19:48.914987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.915020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.915224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.915257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.915372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.915404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.915592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.915625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.666 qpair failed and we were unable to recover it. 00:27:45.666 [2024-11-19 13:19:48.915746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.666 [2024-11-19 13:19:48.915778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.667 qpair failed and we were unable to recover it. 00:27:45.667 [2024-11-19 13:19:48.915911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.667 [2024-11-19 13:19:48.915943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.667 qpair failed and we were unable to recover it. 00:27:45.667 [2024-11-19 13:19:48.916130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.667 [2024-11-19 13:19:48.916162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.667 qpair failed and we were unable to recover it. 00:27:45.667 [2024-11-19 13:19:48.916287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.667 [2024-11-19 13:19:48.916319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.667 qpair failed and we were unable to recover it. 00:27:45.667 [2024-11-19 13:19:48.916508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.667 [2024-11-19 13:19:48.916540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.667 qpair failed and we were unable to recover it. 00:27:45.667 [2024-11-19 13:19:48.916717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.667 [2024-11-19 13:19:48.916750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.667 qpair failed and we were unable to recover it. 
00:27:45.667 [2024-11-19 13:19:48.916945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.667 [2024-11-19 13:19:48.916989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.667 qpair failed and we were unable to recover it. 00:27:45.667 [2024-11-19 13:19:48.917171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.667 [2024-11-19 13:19:48.917203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.667 qpair failed and we were unable to recover it. 00:27:45.667 [2024-11-19 13:19:48.917329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.667 [2024-11-19 13:19:48.917361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.667 qpair failed and we were unable to recover it. 00:27:45.667 [2024-11-19 13:19:48.917532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.667 [2024-11-19 13:19:48.917564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.667 qpair failed and we were unable to recover it. 00:27:45.667 [2024-11-19 13:19:48.917686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.667 [2024-11-19 13:19:48.917719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.667 qpair failed and we were unable to recover it. 00:27:45.667 [2024-11-19 13:19:48.917975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.667 [2024-11-19 13:19:48.918009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.667 qpair failed and we were unable to recover it. 00:27:45.667 [2024-11-19 13:19:48.918184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.667 [2024-11-19 13:19:48.918217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.667 qpair failed and we were unable to recover it. 00:27:45.667 [2024-11-19 13:19:48.918349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.667 [2024-11-19 13:19:48.918381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.667 qpair failed and we were unable to recover it. 00:27:45.667 [2024-11-19 13:19:48.918652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.667 [2024-11-19 13:19:48.918686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.667 qpair failed and we were unable to recover it. 00:27:45.667 [2024-11-19 13:19:48.918794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.667 [2024-11-19 13:19:48.918827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.667 qpair failed and we were unable to recover it. 
00:27:45.667 [2024-11-19 13:19:48.918940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.667 [2024-11-19 13:19:48.918990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.667 qpair failed and we were unable to recover it.
00:27:45.669 [2024-11-19 13:19:48.934592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.669 [2024-11-19 13:19:48.934624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.669 qpair failed and we were unable to recover it.
00:27:45.669 [2024-11-19 13:19:48.934799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.669 [2024-11-19 13:19:48.934871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.669 qpair failed and we were unable to recover it.
00:27:45.671 [2024-11-19 13:19:48.950199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.671 [2024-11-19 13:19:48.950237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.671 qpair failed and we were unable to recover it.
00:27:45.672 [2024-11-19 13:19:48.957264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.672 [2024-11-19 13:19:48.957296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.672 qpair failed and we were unable to recover it.
00:27:45.672 [2024-11-19 13:19:48.957480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.672 [2024-11-19 13:19:48.957517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.672 qpair failed and we were unable to recover it.
00:27:45.672 [2024-11-19 13:19:48.958378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.672 [2024-11-19 13:19:48.958411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.672 qpair failed and we were unable to recover it. 00:27:45.672 [2024-11-19 13:19:48.958647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.672 [2024-11-19 13:19:48.958680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.672 qpair failed and we were unable to recover it. 00:27:45.672 [2024-11-19 13:19:48.958876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.672 [2024-11-19 13:19:48.958909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.672 qpair failed and we were unable to recover it. 00:27:45.672 [2024-11-19 13:19:48.959123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.672 [2024-11-19 13:19:48.959157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.672 qpair failed and we were unable to recover it. 00:27:45.672 [2024-11-19 13:19:48.959350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.672 [2024-11-19 13:19:48.959383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.672 qpair failed and we were unable to recover it. 00:27:45.672 [2024-11-19 13:19:48.959593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.672 [2024-11-19 13:19:48.959626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.672 qpair failed and we were unable to recover it. 00:27:45.672 [2024-11-19 13:19:48.959806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.672 [2024-11-19 13:19:48.959839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.672 qpair failed and we were unable to recover it. 00:27:45.672 [2024-11-19 13:19:48.960041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.672 [2024-11-19 13:19:48.960074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.672 qpair failed and we were unable to recover it. 00:27:45.672 [2024-11-19 13:19:48.960196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.672 [2024-11-19 13:19:48.960228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.672 qpair failed and we were unable to recover it. 00:27:45.672 [2024-11-19 13:19:48.960398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.672 [2024-11-19 13:19:48.960431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.672 qpair failed and we were unable to recover it. 
00:27:45.673 [2024-11-19 13:19:48.960545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.960578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.960788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.960820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.960939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.960982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.961153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.961185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.961313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.961346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.961580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.961612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.961796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.961829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.961956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.961990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.962241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.962272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.962454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.962487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 
00:27:45.673 [2024-11-19 13:19:48.962658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.962691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.962880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.962913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.963180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.963214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.963335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.963373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.963559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.963591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.963781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.963813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.964000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.964033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.964206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.964238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.964354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.964386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.964561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.964593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 
00:27:45.673 [2024-11-19 13:19:48.964771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.964803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.964995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.965028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.965166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.965198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.965377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.965409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.965589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.965621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.965831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.965863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.966097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.966130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.966340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.966373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.966618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.966650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.966903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.966935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 
00:27:45.673 [2024-11-19 13:19:48.967098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.967129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.967310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.967342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.967522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.967554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.967672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.967704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.967938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.967984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.968167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.968199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.968440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.673 [2024-11-19 13:19:48.968472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.673 qpair failed and we were unable to recover it. 00:27:45.673 [2024-11-19 13:19:48.968596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.968628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.968817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.968849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.968969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.969003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 
00:27:45.674 [2024-11-19 13:19:48.969128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.969161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.969342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.969373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.969556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.969588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.969800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.969833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.969963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.969998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.970200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.970233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.970427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.970460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.970725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.970757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.970965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.970999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.971194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.971227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 
00:27:45.674 [2024-11-19 13:19:48.971359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.971390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.971649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.971681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.971804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.971841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.972043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.972080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.972196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.972228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.972421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.972454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.972695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.972727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.972917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.972970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.973099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.973130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.973416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.973448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 
00:27:45.674 [2024-11-19 13:19:48.973570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.973602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.973918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.974006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.974211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.974247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.974377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.974411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.974523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.974556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.974743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.974776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.974890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.974922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.975201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.975234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.975352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.975384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.975515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.975547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 
00:27:45.674 [2024-11-19 13:19:48.975749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.975782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.976021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.976055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.976236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.976270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.976387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.674 [2024-11-19 13:19:48.976420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.674 qpair failed and we were unable to recover it. 00:27:45.674 [2024-11-19 13:19:48.977740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.977794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.977997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.978033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.978274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.978308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.978489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.978521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.978645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.978678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.978795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.978828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 
00:27:45.675 [2024-11-19 13:19:48.978959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.978999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.979186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.979219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.979399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.979433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.979597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.979629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.979813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.979847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.979962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.979996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.980136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.980169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.980422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.980455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.980564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.980596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.980721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.980754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 
00:27:45.675 [2024-11-19 13:19:48.980900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.980933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.981121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.981154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.981289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.981321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.981505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.981537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.981725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.981758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.981877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.981909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.982089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.982124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.982234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.982266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.982398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.982431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.982553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.982587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 
00:27:45.675 [2024-11-19 13:19:48.982755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.982788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.983035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.983069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.983242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.983275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.983396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.983429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.983549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.983583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.983796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.983829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.983967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.984002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.984171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.984209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.984325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.984359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.984476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.984510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 
00:27:45.675 [2024-11-19 13:19:48.984622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.984656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.984842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.675 [2024-11-19 13:19:48.984874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.675 qpair failed and we were unable to recover it. 00:27:45.675 [2024-11-19 13:19:48.985087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.985122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.985328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.985362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.985546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.985578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.985766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.985800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.985908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.985942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.986062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.986095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.986197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.986229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.986344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.986377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 
00:27:45.676 [2024-11-19 13:19:48.986481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.986513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.986632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.986666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.986859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.986892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.987077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.987112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.987240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.987273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.987443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.987476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.987606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.987639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.987818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.987851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.987970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.988003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.988106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.988139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 
00:27:45.676 [2024-11-19 13:19:48.988310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.988343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.988473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.988506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.988755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.988789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.988911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.988944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.989071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.989110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.989223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.989256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.989430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.989463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.989723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.989756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.989871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.989903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 00:27:45.676 [2024-11-19 13:19:48.990042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.676 [2024-11-19 13:19:48.990075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.676 qpair failed and we were unable to recover it. 
00:27:45.676 [2024-11-19 13:19:48.990250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.676 [2024-11-19 13:19:48.990282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.676 qpair failed and we were unable to recover it.
00:27:45.676 [... the three records above repeat, with only the timestamps changing, for every subsequent reconnect attempt against tqpair=0x18ccba0 through 2024-11-19 13:19:48.996160; each attempt fails with errno = 111 ...]
00:27:45.969 [2024-11-19 13:19:48.996320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.969 [2024-11-19 13:19:48.996392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.969 qpair failed and we were unable to recover it.
00:27:45.969 [... the same sequence then repeats against tqpair=0x7f0198000b90 through 2024-11-19 13:19:49.034812, again with errno = 111 and an unrecoverable qpair on every attempt ...]
00:27:45.974 [2024-11-19 13:19:49.034779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.974 [2024-11-19 13:19:49.034812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.974 qpair failed and we were unable to recover it.
00:27:45.974 [2024-11-19 13:19:49.034938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.974 [2024-11-19 13:19:49.034985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.974 qpair failed and we were unable to recover it. 00:27:45.974 [2024-11-19 13:19:49.035161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.974 [2024-11-19 13:19:49.035195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.974 qpair failed and we were unable to recover it. 00:27:45.974 [2024-11-19 13:19:49.035387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.974 [2024-11-19 13:19:49.035419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.974 qpair failed and we were unable to recover it. 00:27:45.974 [2024-11-19 13:19:49.035665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.974 [2024-11-19 13:19:49.035700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.974 qpair failed and we were unable to recover it. 00:27:45.974 [2024-11-19 13:19:49.035818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.974 [2024-11-19 13:19:49.035850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.974 qpair failed and we were unable to recover it. 00:27:45.974 [2024-11-19 13:19:49.036043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.974 [2024-11-19 13:19:49.036077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.974 qpair failed and we were unable to recover it. 00:27:45.974 [2024-11-19 13:19:49.036345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.974 [2024-11-19 13:19:49.036378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.974 qpair failed and we were unable to recover it. 00:27:45.974 [2024-11-19 13:19:49.036562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.974 [2024-11-19 13:19:49.036594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.974 qpair failed and we were unable to recover it. 00:27:45.974 [2024-11-19 13:19:49.036856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.974 [2024-11-19 13:19:49.036891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.974 qpair failed and we were unable to recover it. 00:27:45.974 [2024-11-19 13:19:49.037102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.974 [2024-11-19 13:19:49.037136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.974 qpair failed and we were unable to recover it. 
00:27:45.974 [2024-11-19 13:19:49.037319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.974 [2024-11-19 13:19:49.037351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.974 qpair failed and we were unable to recover it. 00:27:45.974 [2024-11-19 13:19:49.037487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.974 [2024-11-19 13:19:49.037522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.974 qpair failed and we were unable to recover it. 00:27:45.974 [2024-11-19 13:19:49.037770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.974 [2024-11-19 13:19:49.037802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.974 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.038010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.038044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.038285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.038321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.038568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.038600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.038736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.038771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.038875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.038908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.039178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.039212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.039497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.039530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 
00:27:45.975 [2024-11-19 13:19:49.039642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.039678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.039858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.039891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.040027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.040060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.040189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.040221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.040440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.040473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.040598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.040641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.040750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.040783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.040899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.040931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.041081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.041115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.041221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.041255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 
00:27:45.975 [2024-11-19 13:19:49.041512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.041544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.041682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.041715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.041894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.041928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.042139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.042173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.042299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.042332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.042540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.042573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.042772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.042805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.042989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.043023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.043209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.043243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.043427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.043460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 
00:27:45.975 [2024-11-19 13:19:49.043648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.043682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.043878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.043910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.044044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.044079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.044210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.044242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.044367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.044399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.044503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.044535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.044777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.044813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.044993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.045025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.045233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.045265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.045456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.045491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 
00:27:45.975 [2024-11-19 13:19:49.045611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.045644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.045757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.045792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.045978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.046012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.046130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.046162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.046333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.046367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.046537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.046569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.046782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.046816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.046997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.047032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.047218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.047251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.047441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.047474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 
00:27:45.975 [2024-11-19 13:19:49.047650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.047683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.047863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.047896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.048019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.048053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.048321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.048354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.048487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.048519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.048782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.048822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.048961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.048996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.049174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.049210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.049345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.049378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.049570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.049603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 
00:27:45.975 [2024-11-19 13:19:49.049710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.049742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.049851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.049885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.050006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.050040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.050183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.050217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.050335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.050368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.050547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.050580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.050785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.050818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.050935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.050979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.051117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.051149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.051322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.051356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 
00:27:45.975 [2024-11-19 13:19:49.051618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.051649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.051831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.051864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.052075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.052109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.052241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.052274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.052454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.052486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.052593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.052626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.052734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.052766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.052942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.052984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.053097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.053130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.053313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.053346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 
00:27:45.975 [2024-11-19 13:19:49.053612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.053644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.053777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.053810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.054059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.054093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.054290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.054323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.054506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.054540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.054659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.054692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.054826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.054859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.055017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.055050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.055196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.055228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 00:27:45.975 [2024-11-19 13:19:49.055366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.975 [2024-11-19 13:19:49.055399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.975 qpair failed and we were unable to recover it. 
00:27:45.975 [2024-11-19 13:19:49.055580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.055613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.055876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.055909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.056193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.056227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.056445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.056478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.056657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.056690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.056870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.056909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.057105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.057140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.057315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.057347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.057466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.057500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.057693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.057726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 
00:27:45.976 [2024-11-19 13:19:49.057971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.058006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.058212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.058245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.058488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.058522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.058648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.058681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.058934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.058978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.059100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.059133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.059247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.059282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.059530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.059564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.059790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.059824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.059955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.059989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 
00:27:45.976 [2024-11-19 13:19:49.060158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.060192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.060385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.060418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.060659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.060693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.060875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.060907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.061096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.061131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.061249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.061282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.061449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.061482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.061662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.061694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.061986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.062020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.062135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.062169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 
00:27:45.976 [2024-11-19 13:19:49.062366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.062399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.062532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.062565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.062741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.062773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.062943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.062990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.063258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.063291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.063473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.063506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.063691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.063724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.063837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.063871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.064044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.064078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 00:27:45.976 [2024-11-19 13:19:49.064246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.976 [2024-11-19 13:19:49.064280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.976 qpair failed and we were unable to recover it. 
00:27:45.977 [... the same connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it sequence repeats for the intervening reconnect attempts, 2024-11-19 13:19:49.064405 through 13:19:49.104026 ...]
00:27:45.978 [2024-11-19 13:19:49.104252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.104286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.104452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.104484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.104611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.104645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.104882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.104914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.105166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.105200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.105383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.105415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.105612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.105645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.105768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.105800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.105977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.106012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.106202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.106234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 
00:27:45.978 [2024-11-19 13:19:49.106420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.106454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.106727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.106760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.106877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.106910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.107144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.107178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.107367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.107401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.107526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.107563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.107733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.107767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.107891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.107923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.108122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.108156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.978 [2024-11-19 13:19:49.108340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.108373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 
00:27:45.978 [2024-11-19 13:19:49.108578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.978 [2024-11-19 13:19:49.108610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.978 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.108822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.108854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.108975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.109009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.109246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.109278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.109448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.109480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.109647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.109680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.109788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.109821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.109992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.110026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.110294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.110328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.110449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.110482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 
00:27:45.979 [2024-11-19 13:19:49.110711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.110745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.110881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.110914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.111161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.111195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.111319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.111352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.111549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.111583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.111776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.111809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.111993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.112027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.112244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.112279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.112453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.112485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.112611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.112645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 
00:27:45.979 [2024-11-19 13:19:49.112773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.112807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.112988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.113022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.113265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.113336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.113534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.113570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.113744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.113778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.113968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.114004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.114200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.114235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.114366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.114401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.114531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.114564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.114755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.114788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 
00:27:45.979 [2024-11-19 13:19:49.114921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.114965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.115092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.115126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.115323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.115355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.115568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.115602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.115781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.115815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.116017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.116052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.116186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.116220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.116386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.116420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.116656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.116690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.116810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.116843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 
00:27:45.979 [2024-11-19 13:19:49.116966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.117001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.117116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.117150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.117422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.117455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.117625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.117658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.117777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.117810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.117939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.117989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.118090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.118124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.118295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.118329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.118502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.118535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.118659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.118699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 
00:27:45.979 [2024-11-19 13:19:49.118967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.119003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.119188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.119224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.119362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.119394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.119559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.119594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.119880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.119912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.120049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.120084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.120211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.120243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.120366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.120400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.120511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.120543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.120728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.120762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 
00:27:45.979 [2024-11-19 13:19:49.120963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.120998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.121179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.121213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.121333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.121367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.121493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.121527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.121701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.121734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.121861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.121895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.122012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.122045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.122220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.122251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.122366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.122401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.122573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.122607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 
00:27:45.979 [2024-11-19 13:19:49.122732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.122767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.122891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.122922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.123114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.123149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.123393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.123426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.123562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.123594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.123728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.123761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.124012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.124053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.124232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.124266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.124474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.124507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.124689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.124722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 
00:27:45.979 [2024-11-19 13:19:49.124843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.124876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.125061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.125095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.125220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.125254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.125425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.125462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.125572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.125604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.125707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.125737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.125857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.125889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.126041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.126073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.126255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.126288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 00:27:45.979 [2024-11-19 13:19:49.126539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.979 [2024-11-19 13:19:49.126572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.979 qpair failed and we were unable to recover it. 
00:27:45.979 [2024-11-19 13:19:49.126770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.126804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.126916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.126958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.127169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.127202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.127466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.127500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.127621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.127654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.127780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.127813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.127946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.127994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.128170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.128203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.128313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.128346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.128530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.128564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 
00:27:45.980 [2024-11-19 13:19:49.128739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.128771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.128884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.128916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.129045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.129079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.129192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.129224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.129335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.129369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.129488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.129522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.129708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.129742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.129880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.129913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.130099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.130133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.130282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.130317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 
00:27:45.980 [2024-11-19 13:19:49.130484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.130516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.130700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.130734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.130916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.130966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.131141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.131177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.131351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.131384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.131559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.131591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.131853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.131886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.132077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.132113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.132332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.132366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.132610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.132645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 
00:27:45.980 [2024-11-19 13:19:49.132819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.132852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.133070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.133105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.133244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.133277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.133408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.133440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.133565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.133599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.133772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.133806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.133926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.133969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.134128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.134164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.134402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.134435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.134562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.134595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 
00:27:45.980 [2024-11-19 13:19:49.134703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.134736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.134867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.134899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.135102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.135137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.135327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.135359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.135474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.135507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.135742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.135775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.135962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.135996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.136121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.136155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.136266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.136300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 00:27:45.980 [2024-11-19 13:19:49.136563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.980 [2024-11-19 13:19:49.136596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:45.980 qpair failed and we were unable to recover it. 
00:27:45.980 [2024-11-19 13:19:49.136716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.136750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.136923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.136967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.137089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.137123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.137295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.137328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.137504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.137543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.137716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.137748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.137923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.137969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.138159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.138192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.138320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.138353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.138567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.138600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.138735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.138768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.139013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.139048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.139164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.139198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.139384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.139417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.139598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.139633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.139857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.139889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.140032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.140067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.140193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.140225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.140461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.140494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.140677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.140711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.140818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.140851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.141029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.141062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.141273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.141307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.141419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.141452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.141578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.141611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.141740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.141774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.141887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.141919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.142158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.142229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.142401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.142471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.142674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.142713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
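
errno = 111 on Linux is ECONNREFUSED: the TCP connection to 10.0.0.2 port 4420 is actively refused because nothing is accepting on that port at this point in the test, and each failed attempt is reported twice, once by the socket layer (posix.c) and once by the NVMe/TCP transport that owns the qpair (nvme_tcp.c). A minimal stand-alone C sketch of the same failure path, using plain POSIX sockets rather than SPDK's posix.c; the address and port simply mirror the log and are otherwise arbitrary:

    /* probe_4420.c - reproduce "connect() failed, errno = 111"
     * (ECONNREFUSED) with plain POSIX sockets; not SPDK code. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa = { 0 };
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);               /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno = 111. */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
        }

        close(fd);
        return 0;
    }
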
00:27:45.980 [2024-11-19 13:19:49.142973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.143010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.143133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.143176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.143427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.143462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.980 [2024-11-19 13:19:49.143663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.980 [2024-11-19 13:19:49.143696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.980 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.143836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.143869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.144060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.144095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.144222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.144254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.144435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.144467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.144591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.144625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.144747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.144781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.144978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.145013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.145281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.145315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.145527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.145561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.145734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.145767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.145984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.146018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.146284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.146319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.146562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.146594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.146780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.146813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.146996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.147031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.147169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.147203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.147387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.147421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.147608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.147641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.147759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.147794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.148046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.148081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.148213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.148246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.148368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.148402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.148626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.148659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.148840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.148873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.149005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.149039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.149287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.149320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.149459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.149493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.149672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.149709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.149902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.149934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.150061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.150094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.150303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.150337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.150463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.150498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.150633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.150666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.150929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.150971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.151223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.151258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.151390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.151422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.151601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.151634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.151884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.151924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.152064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.152099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.152291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.152325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.152565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.152597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.152737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.152772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.152967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.153002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.153112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.153145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.153384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.153417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.153659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.153692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.153825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.153857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.153967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.154001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.154198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.154232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.154367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.154399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.154625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.154658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.154778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.154810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.154938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.154985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.155181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.155214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.155324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.155357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.155540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.155573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.155755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.155789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.155906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.155938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.156125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.156157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.156273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.156306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.156476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.156510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.156699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.156732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.156967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.157001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.157179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.157210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.157346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.157384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.157566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.157599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.157878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.157910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.158042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.158077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.158258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.158291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.158407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.158439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.158636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.158669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.158844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.158876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.158996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.159029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.159219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.159252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.159365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.159399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.159598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.159631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.159831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.159864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.160073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.160106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.160303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.160335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.160450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.160482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.160674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.160707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.160884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.160918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.161059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.161093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.161277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.161310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.161573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.161607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.161841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.161874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.162136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.162172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.981 [2024-11-19 13:19:49.162295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.981 [2024-11-19 13:19:49.162328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.981 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.162452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.162485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.162737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.162770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.162884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.162917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.163152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.163225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.163462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.163499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.163787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.163820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.164010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.164045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.164218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.164252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.164380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.164413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.164594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.164627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.164822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.164854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.165052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.165087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.165213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.165247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.165383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.165417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.165593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.165625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.165759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.165791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.165919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.165965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.166085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.166118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.166296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.166328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.166439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.166472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.166596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.166629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.166764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.166797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.167004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.167038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.167209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.167243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.167413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.167446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.167690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.167722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.167845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.167878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.168057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.168091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.168219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.168252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.168370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.168404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.168526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.168564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.168682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.168714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.168989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.169022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.169269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.169304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.169483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.169517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.169643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.169674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.169807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.169839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.170104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.170137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.170243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.170273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.170449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.170483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.170620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.170653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.170826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.170858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.170980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.171013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.171223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.171256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.171508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.171541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.171758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.171792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.171970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.172005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.172121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.982 [2024-11-19 13:19:49.172154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.982 qpair failed and we were unable to recover it.
00:27:45.982 [2024-11-19 13:19:49.172352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.172384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.172598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.172630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.172767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.172800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.173061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.173094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.173213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.173245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.173350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.173382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.173630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.173663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.173974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.174007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.174133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.174166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.174341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.174374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 
00:27:45.982 [2024-11-19 13:19:49.174476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.174508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.174704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.174735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.174973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.175006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.175141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.175173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.175378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.175410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.175541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.175574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.175830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.175861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.175975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.176009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.176245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.176278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.176467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.176498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 
00:27:45.982 [2024-11-19 13:19:49.176623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.176655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.176827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.176859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.177030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.177068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.177263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.177297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.177470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.177503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.177709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.177742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.177983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.178018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.178141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.178173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.178353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.178386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.178513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.178547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 
00:27:45.982 [2024-11-19 13:19:49.178717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.178750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.178960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.178995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.179237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.179270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.982 [2024-11-19 13:19:49.179385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.982 [2024-11-19 13:19:49.179417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.982 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.179621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.179653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.179828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.179861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.180035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.180070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.180335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.180367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.180547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.180578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.180697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.180729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 
00:27:45.983 [2024-11-19 13:19:49.180991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.181024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.181224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.181257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.181376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.181408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.181615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.181647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.181827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.181859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.182035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.182069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.182200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.182234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.182420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.182452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.182635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.182668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.182866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.182899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 
00:27:45.983 [2024-11-19 13:19:49.183037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.183070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.183310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.183344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.183465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.183498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.183695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.183728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.183929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.183974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.184090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.184121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.184294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.184326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.184458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.184489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.184677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.184709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.184883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.184915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 
00:27:45.983 [2024-11-19 13:19:49.185042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.185077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.185245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.185278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.185464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.185502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.185636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.185669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.185926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.185968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.186095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.186127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.186315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.186347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.186462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.186494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.186667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.186699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.186907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.186940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 
00:27:45.983 [2024-11-19 13:19:49.187154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.187188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.187361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.187392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.187574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.187605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.187790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.187823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.188039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.188077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.188266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.188300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.188590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.188625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.188799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.188831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.189009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.189042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.189273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.189307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 
00:27:45.983 [2024-11-19 13:19:49.189435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.189467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.189687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.189720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.189827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.189862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.190119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.190151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.190340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.190373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.190481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.190515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.190654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.190686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.190818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.190851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.191042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.191074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.191191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.191224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 
00:27:45.983 [2024-11-19 13:19:49.191330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.191362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.191551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.191583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.191768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.191802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.191913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.191945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.192200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.192232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.192411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.192444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.192710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.192743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.192940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.192983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.193193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.193225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.193353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.193386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 
00:27:45.983 [2024-11-19 13:19:49.193611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.193644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.193816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.193847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.194020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.194059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.194199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.194232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.194419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.194451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.194644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.194677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.194789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.194822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.195002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.195035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.195227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.195261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.195458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.195490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 
00:27:45.983 [2024-11-19 13:19:49.195671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.195703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.195809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.195842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.196024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.196058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.196176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.196210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.196459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.196492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.196615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.196648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.196859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.196893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.197190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.197223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.197325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.197357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.983 qpair failed and we were unable to recover it. 00:27:45.983 [2024-11-19 13:19:49.197608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.983 [2024-11-19 13:19:49.197639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 
00:27:45.984 [2024-11-19 13:19:49.197830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.197863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.198035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.198070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.198202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.198235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.198426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.198459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.198637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.198669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.198934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.198976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.199176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.199210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.199454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.199485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.199613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.199647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.199760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.199793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 
00:27:45.984 [2024-11-19 13:19:49.199970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.200005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.200263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.200295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.200427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.200460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.200577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.200610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.200865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.200897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.201095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.201129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.201236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.201268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.201459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.201492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.201599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.201632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.201873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.201905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 
00:27:45.984 [2024-11-19 13:19:49.202134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.202167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.202358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.202391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.202501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.202538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.202656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.202688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.202806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.202838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.202941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.202994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.203166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.203200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.203391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.203423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.203589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.203622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 00:27:45.984 [2024-11-19 13:19:49.203736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.984 [2024-11-19 13:19:49.203768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:45.984 qpair failed and we were unable to recover it. 
00:27:45.984 [2024-11-19 13:19:49.203887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.984 [2024-11-19 13:19:49.203919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.984 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it.) repeats 10 times for tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420, 13:19:49.204096 - 13:19:49.205913 ...]
[... the sequence repeats 48 times for tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420, 13:19:49.206168 - 13:19:49.216086 ...]
[... the sequence repeats 40 times for tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420, 13:19:49.216326 - 13:19:49.224371 ...]
[... the sequence repeats 30 times for tqpair=0x18ccba0 with addr=10.0.0.2, port=4420, 13:19:49.224612 - 13:19:49.230396 ...]
[... the sequence repeats 80 more times for tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420, 13:19:49.230598 - 13:19:49.246051 ...]
00:27:45.986 [2024-11-19 13:19:49.246186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.986 [2024-11-19 13:19:49.246221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.986 qpair failed and we were unable to recover it.
00:27:45.986 [2024-11-19 13:19:49.246342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.986 [2024-11-19 13:19:49.246376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.986 qpair failed and we were unable to recover it. 00:27:45.986 [2024-11-19 13:19:49.246619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.986 [2024-11-19 13:19:49.246653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.986 qpair failed and we were unable to recover it. 00:27:45.986 [2024-11-19 13:19:49.246780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.986 [2024-11-19 13:19:49.246814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.986 qpair failed and we were unable to recover it. 00:27:45.986 [2024-11-19 13:19:49.246938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.986 [2024-11-19 13:19:49.246979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.986 qpair failed and we were unable to recover it. 00:27:45.986 [2024-11-19 13:19:49.251046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.986 [2024-11-19 13:19:49.251100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.986 qpair failed and we were unable to recover it. 00:27:45.986 [2024-11-19 13:19:49.251433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.986 [2024-11-19 13:19:49.251468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.986 qpair failed and we were unable to recover it. 00:27:45.986 [2024-11-19 13:19:49.251709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.986 [2024-11-19 13:19:49.251744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.986 qpair failed and we were unable to recover it. 00:27:45.986 [2024-11-19 13:19:49.251965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.986 [2024-11-19 13:19:49.252000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.986 qpair failed and we were unable to recover it. 00:27:45.986 [2024-11-19 13:19:49.252191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.986 [2024-11-19 13:19:49.252224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.986 qpair failed and we were unable to recover it. 00:27:45.986 [2024-11-19 13:19:49.252448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.986 [2024-11-19 13:19:49.252483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.986 qpair failed and we were unable to recover it. 
00:27:45.986 [2024-11-19 13:19:49.252727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.986 [2024-11-19 13:19:49.252759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.986 qpair failed and we were unable to recover it. 00:27:45.986 [2024-11-19 13:19:49.252998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.986 [2024-11-19 13:19:49.253031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.986 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.253287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.253329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.253520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.253561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.253762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.253803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.254011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.254056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.254269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.254312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.254503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.254544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.254739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.254780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.254981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.255024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 
00:27:45.987 [2024-11-19 13:19:49.255248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.255292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.255482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.255523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.255722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.255764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.256001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.256044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.256318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.256359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.256566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.256607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.256800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.256842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.257068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.257110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.257372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.257415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.257556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.257598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 
00:27:45.987 [2024-11-19 13:19:49.257735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.257778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.257933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.258008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.258212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.258252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.258454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.258494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.258632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.258672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.258827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.258867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.259003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.259055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.259247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.259274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.259404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.259435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.259617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.259648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 
00:27:45.987 [2024-11-19 13:19:49.259874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.259902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.260082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.260112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.260234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.260264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.260425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.260454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.260665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.260689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.260848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.260869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.261057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.261082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.261242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.261264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.261483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.261504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.261589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.261610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 
00:27:45.987 [2024-11-19 13:19:49.261718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.261738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.261919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.261940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.262039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.262059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.262231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.262253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.262335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.262355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.262461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.262483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.262686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.262707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.262813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.262834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.262999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.263022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.263188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.263210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 
00:27:45.987 [2024-11-19 13:19:49.263313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.263333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.263506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.263528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.263695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.263716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.263831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.263853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.263935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.263964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.264220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.264240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.264326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.264347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.264546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.264566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.264666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.264687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.264787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.264807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 
00:27:45.987 [2024-11-19 13:19:49.264976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.265000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.265104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.265126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.265282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.265303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.265386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.265406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.265485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.265504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.265713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.265734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.265827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.265850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.266007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.266030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.266176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.266197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.266299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.266318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 
00:27:45.987 [2024-11-19 13:19:49.266420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.266441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.266522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.266541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.266689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.266711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.266858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.266878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.267043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.267066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.267245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.267266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.267344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.267364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.267609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.267629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.267782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.267802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.267965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.267987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 
00:27:45.987 [2024-11-19 13:19:49.268084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.268105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.268218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.268239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.268400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.268420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.268521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.268542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.268689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.268709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.268817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.268837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.268938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.268967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.269067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.269088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.269177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.269197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 00:27:45.987 [2024-11-19 13:19:49.269295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.987 [2024-11-19 13:19:49.269317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.987 qpair failed and we were unable to recover it. 
00:27:45.988 [2024-11-19 13:19:49.269901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.988 [2024-11-19 13:19:49.269981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420
00:27:45.988 qpair failed and we were unable to recover it.
00:27:45.988 [2024-11-19 13:19:49.277744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.988 [2024-11-19 13:19:49.277816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:45.988 qpair failed and we were unable to recover it.
00:27:45.988 [2024-11-19 13:19:49.283053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:45.988 [2024-11-19 13:19:49.283087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:45.988 qpair failed and we were unable to recover it.
00:27:45.988 [2024-11-19 13:19:49.283225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.988 [2024-11-19 13:19:49.283259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.988 qpair failed and we were unable to recover it. 00:27:45.988 [2024-11-19 13:19:49.283440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.988 [2024-11-19 13:19:49.283473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.988 qpair failed and we were unable to recover it. 00:27:45.988 [2024-11-19 13:19:49.283726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.988 [2024-11-19 13:19:49.283758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.988 qpair failed and we were unable to recover it. 00:27:45.988 [2024-11-19 13:19:49.283871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.988 [2024-11-19 13:19:49.283905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.988 qpair failed and we were unable to recover it. 00:27:45.988 [2024-11-19 13:19:49.284019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.988 [2024-11-19 13:19:49.284054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.988 qpair failed and we were unable to recover it. 00:27:45.988 [2024-11-19 13:19:49.284230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.988 [2024-11-19 13:19:49.284264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.988 qpair failed and we were unable to recover it. 00:27:45.988 [2024-11-19 13:19:49.284378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.988 [2024-11-19 13:19:49.284411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.988 qpair failed and we were unable to recover it. 00:27:45.988 [2024-11-19 13:19:49.284592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.988 [2024-11-19 13:19:49.284625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.988 qpair failed and we were unable to recover it. 00:27:45.988 [2024-11-19 13:19:49.284760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.988 [2024-11-19 13:19:49.284793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.988 qpair failed and we were unable to recover it. 00:27:45.988 [2024-11-19 13:19:49.284909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.988 [2024-11-19 13:19:49.284942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.988 qpair failed and we were unable to recover it. 
00:27:45.988 [2024-11-19 13:19:49.285214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.988 [2024-11-19 13:19:49.285247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.988 qpair failed and we were unable to recover it. 00:27:45.988 [2024-11-19 13:19:49.285433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.988 [2024-11-19 13:19:49.285466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.988 qpair failed and we were unable to recover it. 00:27:45.988 [2024-11-19 13:19:49.285573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.988 [2024-11-19 13:19:49.285605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.988 qpair failed and we were unable to recover it. 00:27:45.988 [2024-11-19 13:19:49.285800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.285833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.285937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.285982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.286173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.286207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.286402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.286434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.286567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.286599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.286781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.286813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.287077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.287111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 
00:27:45.989 [2024-11-19 13:19:49.287233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.287265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.287444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.287477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.287675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.287708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.287880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.287913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.288148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.288182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.288379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.288413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.288534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.288567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.288761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.288795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.288983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.289016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.289221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.289255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 
00:27:45.989 [2024-11-19 13:19:49.289424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.289457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.289633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.289666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.289786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.289819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.290013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.290046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.290169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.290202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.290309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.290343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.290451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.290483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.290697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.290729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.290852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.290885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.291103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.291137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 
00:27:45.989 [2024-11-19 13:19:49.291278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.291311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.291491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.291524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.291761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.291793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.291987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.292022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.292210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.292244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.292369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.292402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.292585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.292617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.292722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.292755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.292863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.292896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.293096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.293130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 
00:27:45.989 [2024-11-19 13:19:49.293319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.293351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.293466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.293499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.293680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.293712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.293884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.293921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.294052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.294085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.294190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.294223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.294402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.294435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.294689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.294722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.294979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.295013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.295206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.295239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 
00:27:45.989 [2024-11-19 13:19:49.295413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.295446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.295728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.295762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.295875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.295907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.296127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.296162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.296345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.296378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.296501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.296533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.296713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.296746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.296932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.296978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.297161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.297194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.297416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.297448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 
00:27:45.989 [2024-11-19 13:19:49.297707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.297741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.297982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.298015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.298187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.298219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.298486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.298520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.298724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.298758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.298955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.298988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.299178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.299212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.299328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.299360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.299525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.299559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.299805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.299837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 
00:27:45.989 [2024-11-19 13:19:49.299961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.299994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.300186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.300219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.300355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.300388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.300564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.300597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.300731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.300765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.300877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.300910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.301096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.301129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.301370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.301403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.301647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.301681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.301852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.301885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 
00:27:45.989 [2024-11-19 13:19:49.302071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.302105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.302348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.302380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.302569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.302603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.302732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.302770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.302887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.302918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.303066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.303100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.303272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.303305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.303443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.303475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.303603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.303635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 00:27:45.989 [2024-11-19 13:19:49.303806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.303839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.989 qpair failed and we were unable to recover it. 
00:27:45.989 [2024-11-19 13:19:49.303944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.989 [2024-11-19 13:19:49.303997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.304111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.304143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.304323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.304356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.304529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.304563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.304729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.304762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.304890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.304924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.305060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.305094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.305311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.305346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.305542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.305575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.305748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.305782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 
00:27:45.990 [2024-11-19 13:19:49.305890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.305922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.306067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.306100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.306309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.306342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.306482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.306516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.306642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.306675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.306862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.306895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.307026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.307062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.307195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.307228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.307352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.307385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.307499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.307534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 
00:27:45.990 [2024-11-19 13:19:49.307749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.307783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.307905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.307939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.308165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.308200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.308374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.308407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.308694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.308728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.308907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.308941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.309063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.309097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.309285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.309319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.309527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.309561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.309798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.309832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 
00:27:45.990 [2024-11-19 13:19:49.310012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.310046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.310175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.310208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.310320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.310354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.310473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.310512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.310649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.310685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.310864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.310898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.311167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.311200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.311314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.311347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.311521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.311554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.311682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.311715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 
00:27:45.990 [2024-11-19 13:19:49.311896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.311928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.312065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.312099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.312338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.312370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.312584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.312618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.312861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.312894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.313028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.313062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.313337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.313369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.313495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.313529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.313667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.313699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.313906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.313938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 
00:27:45.990 [2024-11-19 13:19:49.314125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.314158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.314377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.314410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.314639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.314671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.314894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.314928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.315118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.315153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.315275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.315307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.315496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.315530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.315742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.315775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.315893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.315927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.316065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.316100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 
00:27:45.990 [2024-11-19 13:19:49.316302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.316334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.316450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.316483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.316600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.316634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.316812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.316846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.317015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.317050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.317174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.317207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.317470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.990 [2024-11-19 13:19:49.317504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:45.990 qpair failed and we were unable to recover it. 00:27:45.990 [2024-11-19 13:19:49.317743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.317775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.317958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.317993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.318129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.318164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 
00:27:46.277 [2024-11-19 13:19:49.318347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.318380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.318564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.318598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.318778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.318811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.319001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.319041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.319148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.319182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.319283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.319316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.319509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.319543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.319735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.319769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.320000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.320034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.320158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.320193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 
00:27:46.277 [2024-11-19 13:19:49.320365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.320399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.320519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.320553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.320740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.320773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.320881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.320914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.321162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.321196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.321374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.321407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.321583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.321616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.321755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.321788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.321980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.322015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.322143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.322177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 
00:27:46.277 [2024-11-19 13:19:49.322440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.322473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.322647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.322680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.322807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.322840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.322946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.322989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.323256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.323289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.323476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.323510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.323767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.277 [2024-11-19 13:19:49.323801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.277 qpair failed and we were unable to recover it. 00:27:46.277 [2024-11-19 13:19:49.324044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.324078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.324281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.324314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.324446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.324480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 
00:27:46.278 [2024-11-19 13:19:49.324596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.324630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.324918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.324959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.325066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.325099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.325285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.325319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.325490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.325523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.325711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.325744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.325871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.325904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.326038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.326072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.326247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.326281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.326456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.326489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 
00:27:46.278 [2024-11-19 13:19:49.326664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.326697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.326820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.326854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.327088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.327124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.327296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.327334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.327547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.327580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.327761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.327796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.328041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.328076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.328188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.328221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.328329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.328363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.328601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.328634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 
00:27:46.278 [2024-11-19 13:19:49.328871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.328904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.329180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.329215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.329335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.329369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.329475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.329508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.329626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.329660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.329831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.329864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.330110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.330145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.330267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.330301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.330473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.330507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.330746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.330780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 
00:27:46.278 [2024-11-19 13:19:49.330895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.330926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.331062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.331095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.331304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.331337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.331584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.331616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.278 [2024-11-19 13:19:49.331857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.278 [2024-11-19 13:19:49.331890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.278 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.332102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.332136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.332334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.332367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.332566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.332599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.332774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.332807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.332980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.333014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 
00:27:46.279 [2024-11-19 13:19:49.333186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.333220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.333408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.333440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.333611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.333643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.333766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.333799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.333911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.333944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.334124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.334157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.334328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.334361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.334546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.334579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.334753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.334785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.334981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.335015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 
00:27:46.279 [2024-11-19 13:19:49.335127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.335160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.335345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.335377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.335556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.335589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.335729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.335767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.335990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.336023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.336148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.336182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.336417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.336451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.336571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.336604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.336773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.336806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.336987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.337021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 
00:27:46.279 [2024-11-19 13:19:49.337220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.337254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.337434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.337468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.337641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.337675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.337805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.337839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.337978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.338012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.338147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.338181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.338355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.338388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.338521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.338554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.338786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.338819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.339002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.339036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 
00:27:46.279 [2024-11-19 13:19:49.339159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.339192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.339385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.339417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.279 qpair failed and we were unable to recover it. 00:27:46.279 [2024-11-19 13:19:49.339556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.279 [2024-11-19 13:19:49.339589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.339796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.339828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.340006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.340040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.340162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.340194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.340363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.340396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.340573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.340606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.340790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.340823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.341094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.341129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 
00:27:46.280 [2024-11-19 13:19:49.341322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.341355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.341459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.341492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.341682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.341715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.341822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.341854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.341992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.342026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.342242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.342276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.342464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.342497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.342676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.342709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.342918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.342960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.343147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.343180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 
00:27:46.280 [2024-11-19 13:19:49.343364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.343397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.343574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.343608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.343735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.343768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.343964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.344003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.344248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.344281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.344453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.344486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.344601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.344634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.344897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.344931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.345134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.345167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.345357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.345390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 
00:27:46.280 [2024-11-19 13:19:49.345560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.345593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.345786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.345819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.345930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.345972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.346146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.346180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.346469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.346501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.346611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.346644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.346905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.346939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.347147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.347181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.347313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.347347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 00:27:46.280 [2024-11-19 13:19:49.347598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.280 [2024-11-19 13:19:49.347630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.280 qpair failed and we were unable to recover it. 
00:27:46.281 [2024-11-19 13:19:49.347812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.281 [2024-11-19 13:19:49.347845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.281 qpair failed and we were unable to recover it. 00:27:46.281 [2024-11-19 13:19:49.348085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.281 [2024-11-19 13:19:49.348120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.281 qpair failed and we were unable to recover it. 00:27:46.281 [2024-11-19 13:19:49.348253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.281 [2024-11-19 13:19:49.348286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.281 qpair failed and we were unable to recover it. 00:27:46.281 [2024-11-19 13:19:49.348521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.281 [2024-11-19 13:19:49.348554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.281 qpair failed and we were unable to recover it. 00:27:46.281 [2024-11-19 13:19:49.348691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.281 [2024-11-19 13:19:49.348724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.281 qpair failed and we were unable to recover it. 00:27:46.281 [2024-11-19 13:19:49.348986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.281 [2024-11-19 13:19:49.349020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.281 qpair failed and we were unable to recover it. 00:27:46.281 [2024-11-19 13:19:49.349200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.281 [2024-11-19 13:19:49.349233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.281 qpair failed and we were unable to recover it. 00:27:46.281 [2024-11-19 13:19:49.349341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.281 [2024-11-19 13:19:49.349374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.281 qpair failed and we were unable to recover it. 00:27:46.281 [2024-11-19 13:19:49.349483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.281 [2024-11-19 13:19:49.349514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.281 qpair failed and we were unable to recover it. 00:27:46.281 [2024-11-19 13:19:49.349786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.281 [2024-11-19 13:19:49.349819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.281 qpair failed and we were unable to recover it. 
00:27:46.281 [2024-11-19 13:19:49.349966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.350000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.350203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.350235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.350369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.350401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.350587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.350620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.350824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.350857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.351030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.351064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.351174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.351207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.351465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.351498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.351682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.351715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.351901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.351934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.352154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.352186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.352453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.352486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.352606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.352639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.352808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.352846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.352981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.353016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.353198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.353231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.353375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.353407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.353596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.353629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.353750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.353783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.353908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.353940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.281 [2024-11-19 13:19:49.354146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.281 [2024-11-19 13:19:49.354180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.281 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.354439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.354472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.354689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.354722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.354911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.354944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.355146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.355180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.355419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.355452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.355574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.355607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.355714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.355748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.355960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.355994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.356254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.356287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.356469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.356501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.356741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.356773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.356986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.357020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.357295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.357329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.357454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.357487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.357691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.357724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.357849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.357882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.358123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.358157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.358404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.358436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.358630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.358664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.358791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.358824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.359061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.359095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.359365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.359398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.359532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.359564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.359754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.359787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.359963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.359998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.360181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.360213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.360339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.360373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.360505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.360538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.360804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.360838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.360981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.361014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.361137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.361171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.361290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.361323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.361605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.361643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.361825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.361858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.361977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.362010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.362193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.362226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.362412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.362446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.282 qpair failed and we were unable to recover it.
00:27:46.282 [2024-11-19 13:19:49.362580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.282 [2024-11-19 13:19:49.362612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.362875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.362908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.363122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.363156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.363340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.363374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.363495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.363527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.363655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.363689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.363815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.363848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.364023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.364056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.364233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.364267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.364453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.364487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.364597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.364630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.364735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.364769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.364940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.365001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.365188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.365221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.365462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.365495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.365603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.365636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.365810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.365843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.366016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.366051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.366230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.366263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.366530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.366562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.366736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.366769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.366890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.366924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.367122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.367155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.367325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.367358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.367537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.367570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.367692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.367725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.367922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.367963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.368223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.368255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.368437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.368470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.368656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.368689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.368855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.368888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.369020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.369054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.369290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.369323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.369517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.369550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.369738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.369772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.370010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.370049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.370222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.370254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.370443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.370476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.283 qpair failed and we were unable to recover it.
00:27:46.283 [2024-11-19 13:19:49.370645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.283 [2024-11-19 13:19:49.370679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.370860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.370893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.371089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.371123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.371320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.371352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.371536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.371569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.371784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.371816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.371945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.371988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.372166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.372200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.372379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.372412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.372607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.372640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.372901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.372934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.373206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.373239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.373353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.373386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.373512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.373545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.373650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.373682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.373923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.373968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.374173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.374206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.374376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.374410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.374605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.374638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.374811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.374843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.374977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.375012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.375200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.375233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.375516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.375549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.375680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.375712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.375893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.375927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.376093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.376127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.376236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.376268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.376446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.376480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.376607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.376640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.376826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.376858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.376968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.377003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.377182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.377214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.377381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.377413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.377671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.377704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.284 qpair failed and we were unable to recover it.
00:27:46.284 [2024-11-19 13:19:49.377913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.284 [2024-11-19 13:19:49.377945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.378122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.378155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.378278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.378310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.378422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.378460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.378723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.378755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.378927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.378969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.379152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.379184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.379377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.379409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.379595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.379627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.379796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.379829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.380089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.380123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.380317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.380349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.380520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.380552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.380666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.380698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.380812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.380844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.380982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.381016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.381220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.381253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.381435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.381468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.381583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.381615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.381799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.381832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.381965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.381999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.382174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.382205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.382378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.382410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.382512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.382542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.382651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.382683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.382782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.382814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.383020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.383055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.383253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.383285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.383394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.383426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.383662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.383694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.383908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.383942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.384072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.384105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.384287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.384319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.384575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.384609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.384805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.384837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.385097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.385132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.385338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.385372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.385488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.285 [2024-11-19 13:19:49.385520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.285 qpair failed and we were unable to recover it.
00:27:46.285 [2024-11-19 13:19:49.385637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.286 [2024-11-19 13:19:49.385669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.286 qpair failed and we were unable to recover it.
00:27:46.286 [2024-11-19 13:19:49.385841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.286 [2024-11-19 13:19:49.385874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.286 qpair failed and we were unable to recover it.
00:27:46.286 [2024-11-19 13:19:49.386056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.386090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.386226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.386260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.386445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.386479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.386606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.386645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.386773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.386806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.386917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.386957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.387152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.387184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.387298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.387331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.387513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.387546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.387747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.387781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 
00:27:46.286 [2024-11-19 13:19:49.387903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.387936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.388183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.388217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.388428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.388460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.388664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.388697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.388872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.388905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.389148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.389182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.389380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.389413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.389524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.389557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.389695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.389728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.389923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.389968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 
00:27:46.286 [2024-11-19 13:19:49.390165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.390198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.390383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.390416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.390589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.390621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.390811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.390844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.391031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.391066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.391192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.391225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.391353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.391387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.391651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.391685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.391869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.391901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.286 [2024-11-19 13:19:49.392021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.392055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 
00:27:46.286 [2024-11-19 13:19:49.392235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.286 [2024-11-19 13:19:49.392269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.286 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.392386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.392419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.392607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.392639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.392810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.392843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.393100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.393133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.393315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.393349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.393522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.393555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.393745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.393778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.394044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.394078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.394265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.394298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 
00:27:46.287 [2024-11-19 13:19:49.394553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.394586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.394782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.394814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.394945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.394989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.395273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.395306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.395491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.395524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.395696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.395730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.395846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.395879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.396069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.396103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.396219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.396252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.396369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.396402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 
00:27:46.287 [2024-11-19 13:19:49.396662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.396695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.396880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.396912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.397112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.397146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.397381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.397414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.397553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.397586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.397699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.397732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.397931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.397972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.398191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.398225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.287 [2024-11-19 13:19:49.398403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.287 [2024-11-19 13:19:49.398436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.287 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.398615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.398648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 
00:27:46.288 [2024-11-19 13:19:49.398819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.398852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.398989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.399023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.399205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.399238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.399430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.399464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.399666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.399698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.399969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.400004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.400176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.400208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.400406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.400439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.400634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.400666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.400855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.400887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 
00:27:46.288 [2024-11-19 13:19:49.401068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.401108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.401376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.401408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.401524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.401557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.401737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.401770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.401968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.402002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.402187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.402220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.402335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.402368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.402544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.402576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.402773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.402806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.402917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.402961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 
00:27:46.288 [2024-11-19 13:19:49.403088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.403121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.403233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.403267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.403460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.403494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.403732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.403765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.403968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.404003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.404136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.288 [2024-11-19 13:19:49.404170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.288 qpair failed and we were unable to recover it. 00:27:46.288 [2024-11-19 13:19:49.404419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.404452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.404659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.404693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.404929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.404972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.405081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.405114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 
00:27:46.289 [2024-11-19 13:19:49.405241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.405274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.405391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.405423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.405591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.405624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.405836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.405869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.406000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.406034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.406297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.406330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.406539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.406572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.406707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.406740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.406854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.406887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.407135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.407170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 
00:27:46.289 [2024-11-19 13:19:49.407433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.407465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.407670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.407703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.407821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.407854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.407971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.408004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.408178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.408212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.408388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.408421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.408682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.408714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.408906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.408939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.409207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.409241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.409428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.409461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 
00:27:46.289 [2024-11-19 13:19:49.409632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.409671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.409787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.409820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.409992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.289 [2024-11-19 13:19:49.410026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.289 qpair failed and we were unable to recover it. 00:27:46.289 [2024-11-19 13:19:49.410202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.410235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.410350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.410382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.410558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.410590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.410760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.410793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.411016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.411050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.411286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.411318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.411427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.411460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 
00:27:46.290 [2024-11-19 13:19:49.411648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.411681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.411862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.411895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.412034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.412068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.412255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.412287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.412473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.412506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.412689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.412722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.412895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.412927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.413078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.413112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.413295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.413327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.413521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.413554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 
00:27:46.290 [2024-11-19 13:19:49.413818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.413850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.414033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.414067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.414339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.414372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.414586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.414618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.414794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.414827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.415027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.415061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.415189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.415222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.415408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.415442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.415616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.290 [2024-11-19 13:19:49.415648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.290 qpair failed and we were unable to recover it. 00:27:46.290 [2024-11-19 13:19:49.415822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.291 [2024-11-19 13:19:49.415855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.291 qpair failed and we were unable to recover it. 
00:27:46.291 [2024-11-19 13:19:49.416057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.291 [2024-11-19 13:19:49.416090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.291 qpair failed and we were unable to recover it. 00:27:46.291 [2024-11-19 13:19:49.416224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.291 [2024-11-19 13:19:49.416257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.291 qpair failed and we were unable to recover it. 00:27:46.291 [2024-11-19 13:19:49.416464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.291 [2024-11-19 13:19:49.416497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.291 qpair failed and we were unable to recover it. 00:27:46.291 [2024-11-19 13:19:49.416737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.291 [2024-11-19 13:19:49.416771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.291 qpair failed and we were unable to recover it. 00:27:46.291 [2024-11-19 13:19:49.416940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.291 [2024-11-19 13:19:49.417000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.291 qpair failed and we were unable to recover it. 00:27:46.291 [2024-11-19 13:19:49.417124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.291 [2024-11-19 13:19:49.417156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.291 qpair failed and we were unable to recover it. 00:27:46.291 [2024-11-19 13:19:49.417352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.291 [2024-11-19 13:19:49.417385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.291 qpair failed and we were unable to recover it. 00:27:46.291 [2024-11-19 13:19:49.417575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.291 [2024-11-19 13:19:49.417608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.291 qpair failed and we were unable to recover it. 00:27:46.291 [2024-11-19 13:19:49.417850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.291 [2024-11-19 13:19:49.417882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.291 qpair failed and we were unable to recover it. 00:27:46.291 [2024-11-19 13:19:49.418071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.291 [2024-11-19 13:19:49.418105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.291 qpair failed and we were unable to recover it. 
00:27:46.291 [2024-11-19 13:19:49.418280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.291 [2024-11-19 13:19:49.418318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.291 qpair failed and we were unable to recover it.
00:27:46.297 [the same three-line failure repeats continuously from 13:19:49.418 through 13:19:49.464: connect() failed (errno = 111) -> sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it."]
00:27:46.297 [2024-11-19 13:19:49.464822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.297 [2024-11-19 13:19:49.464854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.297 qpair failed and we were unable to recover it. 00:27:46.297 [2024-11-19 13:19:49.465061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.297 [2024-11-19 13:19:49.465094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.297 qpair failed and we were unable to recover it. 00:27:46.297 [2024-11-19 13:19:49.465309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.297 [2024-11-19 13:19:49.465341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.297 qpair failed and we were unable to recover it. 00:27:46.297 [2024-11-19 13:19:49.465518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.297 [2024-11-19 13:19:49.465550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.297 qpair failed and we were unable to recover it. 00:27:46.297 [2024-11-19 13:19:49.465754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.297 [2024-11-19 13:19:49.465786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.297 qpair failed and we were unable to recover it. 00:27:46.297 [2024-11-19 13:19:49.465984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.297 [2024-11-19 13:19:49.466018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.297 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.466195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.466228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.466332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.466364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.466573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.466606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.466735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.466767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 
00:27:46.298 [2024-11-19 13:19:49.466937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.466981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.467197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.467230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.467428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.467461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.467645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.467677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.467852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.467884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.468005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.468039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.468169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.468201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.468323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.468357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.468565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.468598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.468724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.468756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 
00:27:46.298 [2024-11-19 13:19:49.468929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.468972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.469163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.469195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.469301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.469333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.469507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.469539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.469671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.469703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.469835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.469867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.470043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.470078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.470183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.470216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.470393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.470425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.470542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.470574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 
00:27:46.298 [2024-11-19 13:19:49.470746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.470784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.471046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.471080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.471268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.471299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.471549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.471583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.471798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.471831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.471933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.471975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.472110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.472143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.472347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.472379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.472564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.472596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.472783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.472816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 
00:27:46.298 [2024-11-19 13:19:49.473005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.473038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.473229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.473261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.473384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.473417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.298 qpair failed and we were unable to recover it. 00:27:46.298 [2024-11-19 13:19:49.473680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.298 [2024-11-19 13:19:49.473712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.473933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.473989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.474116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.474148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.474340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.474373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.474555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.474588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.474788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.474821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.475013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.475047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 
00:27:46.299 [2024-11-19 13:19:49.475220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.475253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.475537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.475570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.475746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.475779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.475978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.476011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.476218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.476251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.476437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.476470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.476703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.476735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.476917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.476969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.477163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.477197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.477374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.477405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 
00:27:46.299 [2024-11-19 13:19:49.477595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.477627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.477811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.477843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.478017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.478050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.478168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.478200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.478373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.478405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.478668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.478700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.478872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.478905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.479174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.479208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.479337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.479369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.479632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.479664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 
00:27:46.299 [2024-11-19 13:19:49.479800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.479838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.479968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.480001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.480137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.480171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.480286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.480319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.480529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.480562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.480827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.480860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.481096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.481129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.481384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.481417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.481590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.481623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.481795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.481827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 
00:27:46.299 [2024-11-19 13:19:49.481985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.482019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.299 qpair failed and we were unable to recover it. 00:27:46.299 [2024-11-19 13:19:49.482259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.299 [2024-11-19 13:19:49.482291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.482464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.482496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.482701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.482734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.482981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.483015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.483200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.483232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.483418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.483450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.483667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.483699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.483816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.483848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.484085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.484118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 
00:27:46.300 [2024-11-19 13:19:49.484302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.484334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.484471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.484504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.484748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.484782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.484913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.484945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.485150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.485183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.485382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.485414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.485546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.485577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.485770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.485804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.485994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.486027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.486275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.486308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 
00:27:46.300 [2024-11-19 13:19:49.486570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.486602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.486782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.486813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.486944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.486984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.487188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.487221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.487351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.487383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.487556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.487589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.487706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.487738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.487999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.488032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.488232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.488264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.488392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.488425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 
00:27:46.300 [2024-11-19 13:19:49.488598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.488636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.488752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.488784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.489026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.489060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.489192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.489224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.489407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.489440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.489558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.489590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.489840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.489873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.490055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.490089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.300 qpair failed and we were unable to recover it. 00:27:46.300 [2024-11-19 13:19:49.490350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.300 [2024-11-19 13:19:49.490382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.301 qpair failed and we were unable to recover it. 00:27:46.301 [2024-11-19 13:19:49.490647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.301 [2024-11-19 13:19:49.490679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.301 qpair failed and we were unable to recover it. 
00:27:46.301 [2024-11-19 13:19:49.490919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.301 [2024-11-19 13:19:49.490961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.301 qpair failed and we were unable to recover it. 00:27:46.301 [2024-11-19 13:19:49.491136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.301 [2024-11-19 13:19:49.491168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.301 qpair failed and we were unable to recover it. 00:27:46.301 [2024-11-19 13:19:49.491373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.301 [2024-11-19 13:19:49.491405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.301 qpair failed and we were unable to recover it. 00:27:46.301 [2024-11-19 13:19:49.491588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.301 [2024-11-19 13:19:49.491620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.301 qpair failed and we were unable to recover it. 00:27:46.301 [2024-11-19 13:19:49.491742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.301 [2024-11-19 13:19:49.491775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.301 qpair failed and we were unable to recover it. 00:27:46.301 [2024-11-19 13:19:49.492015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.301 [2024-11-19 13:19:49.492049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.301 qpair failed and we were unable to recover it. 00:27:46.301 [2024-11-19 13:19:49.492288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.301 [2024-11-19 13:19:49.492321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.301 qpair failed and we were unable to recover it. 00:27:46.301 [2024-11-19 13:19:49.492428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.301 [2024-11-19 13:19:49.492461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.301 qpair failed and we were unable to recover it. 00:27:46.301 [2024-11-19 13:19:49.492587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.301 [2024-11-19 13:19:49.492619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.301 qpair failed and we were unable to recover it. 00:27:46.301 [2024-11-19 13:19:49.492737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.301 [2024-11-19 13:19:49.492769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.301 qpair failed and we were unable to recover it. 
00:27:46.301 [2024-11-19 13:19:49.492879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.301 [2024-11-19 13:19:49.492912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.301 qpair failed and we were unable to recover it.
00:27:46.301 [2024-11-19 13:19:49.494365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.301 [2024-11-19 13:19:49.494437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:46.301 qpair failed and we were unable to recover it.
00:27:46.301 [2024-11-19 13:19:49.494684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.301 [2024-11-19 13:19:49.494756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.301 qpair failed and we were unable to recover it.
[... the identical three-message sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> qpair failed and we were unable to recover it.) repeats continuously through [2024-11-19 13:19:49.537229], cycling across tqpair=0x7f019c000b90, 0x7f01a4000b90, and 0x7f0198000b90; every attempt targets addr=10.0.0.2, port=4420. ...]
00:27:46.307 [2024-11-19 13:19:49.537421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.537453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.537584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.537618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.537800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.537832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.538007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.538041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.538150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.538184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.538356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.538388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.538600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.538633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.538825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.538858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.539098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.539133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.539372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.539405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 
00:27:46.307 [2024-11-19 13:19:49.539589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.539622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.539846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.539880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.539995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.540030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.540162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.540195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.540318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.540352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.540481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.540514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.540628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.540661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.540840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.540873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.540977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.541011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.541197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.541231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 
00:27:46.307 [2024-11-19 13:19:49.541489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.541520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.541657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.541690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.541869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.541902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.542055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.542089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.542360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.542393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.542576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.542609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.542786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.542820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.542944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.542986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.543099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.543131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.543267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.543301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 
00:27:46.307 [2024-11-19 13:19:49.543537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.543569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.543747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.543780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.543904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.543938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.544073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.307 [2024-11-19 13:19:49.544105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.307 qpair failed and we were unable to recover it. 00:27:46.307 [2024-11-19 13:19:49.544223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.544255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.544523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.544555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.544750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.544783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.544998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.545037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.545182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.545216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.545461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.545493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 
00:27:46.308 [2024-11-19 13:19:49.545616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.545650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.545759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.545791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.545978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.546012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.546137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.546170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.546429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.546462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.546634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.546667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.546787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.546820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.547009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.547043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.547231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.547265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.547376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.547408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 
00:27:46.308 [2024-11-19 13:19:49.547668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.547701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.547811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.547843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.547980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.548014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.548124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.548158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.548359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.548391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.548573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.548606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.548777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.548810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.548989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.549023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.549206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.549240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.549407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.549440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 
00:27:46.308 [2024-11-19 13:19:49.549547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.549580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.549685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.549718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.549962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.549996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.550127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.550164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.550299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.550332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.550511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.550543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.550728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.550761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.551015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.551049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.551155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.551188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.551319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.551351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 
00:27:46.308 [2024-11-19 13:19:49.551477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.551510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.551699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.308 [2024-11-19 13:19:49.551732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.308 qpair failed and we were unable to recover it. 00:27:46.308 [2024-11-19 13:19:49.551859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.551892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.552078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.552117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.552380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.552412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.552662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.552695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.552800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.552834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.553036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.553074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.553193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.553225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.553332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.553366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 
00:27:46.309 [2024-11-19 13:19:49.553619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.553650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.553831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.553864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.553976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.554009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.554186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.554219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.554356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.554389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.554493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.554525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.554767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.554801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.554989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.555024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.555213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.555247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.555425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.555457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 
00:27:46.309 [2024-11-19 13:19:49.555645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.555679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.555866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.555900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.556148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.556182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.556389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.556422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.556609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.556641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.556825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.556858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.557041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.557074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.557197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.557229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.557517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.557550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.557818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.557851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 
00:27:46.309 [2024-11-19 13:19:49.558029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.558061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.558267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.558299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.558476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.558509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.558708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.558739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.558944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.558988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.559225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.559258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.559442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.559475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.559601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.559633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.559804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.559838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 00:27:46.309 [2024-11-19 13:19:49.559939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.309 [2024-11-19 13:19:49.559982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.309 qpair failed and we were unable to recover it. 
00:27:46.310 [2024-11-19 13:19:49.560239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.560271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.560398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.560430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.560617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.560650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.560776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.560808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.561009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.561043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.561170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.561203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.561323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.561356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.561463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.561502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.561610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.561641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.561819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.561852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 
00:27:46.310 [2024-11-19 13:19:49.561990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.562023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.562197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.562229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.562344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.562377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.562484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.562516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.562760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.562793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.562991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.563024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.563142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.563174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.563292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.563324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.563492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.563527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 00:27:46.310 [2024-11-19 13:19:49.563696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.563728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it. 
00:27:46.310 [2024-11-19 13:19:49.563835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.310 [2024-11-19 13:19:49.563868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.310 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it") repeats verbatim, with only the timestamps advancing, for every reconnect attempt of tqpair=0x7f0198000b90 against 10.0.0.2:4420 from 13:19:49.563835 through 13:19:49.606862 ...]
00:27:46.316 [2024-11-19 13:19:49.607032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.607066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.607238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.607269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.607382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.607414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.607588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.607620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.607867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.607900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.608016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.608049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.608170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.608202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.608311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.608343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.608466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.608499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.608704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.608737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 
00:27:46.316 [2024-11-19 13:19:49.608850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.608882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.609004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.609038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.609302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.609334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.609465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.609497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.609670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.609703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.609944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.609985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.610100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.610132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.610372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.610404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.610523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.610556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.610687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.610719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 
00:27:46.316 [2024-11-19 13:19:49.610970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.611004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.611126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.611159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.611327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.611358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.611485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.611517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.611623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.611656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.611765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.316 [2024-11-19 13:19:49.611797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.316 qpair failed and we were unable to recover it. 00:27:46.316 [2024-11-19 13:19:49.612008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.612042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.612148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.612181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.612415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.612447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.612657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.612690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 
00:27:46.317 [2024-11-19 13:19:49.612810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.612842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.612964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.612997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.613174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.613206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.613323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.613355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.613524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.613556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.613753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.613786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.613966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.613999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.614195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.614228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.614340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.614372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.614479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.614511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 
00:27:46.317 [2024-11-19 13:19:49.614632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.614664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.614835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.614867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.614991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.615025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.615307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.615339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.615527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.615565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.615669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.615702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.615870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.615903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.616032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.616067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.616254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.616286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.616406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.616438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 
00:27:46.317 [2024-11-19 13:19:49.616624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.616655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.616761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.616794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.617034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.617068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.617171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.617203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.617375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.617407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.617517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.617550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.617724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.617756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.617994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.618027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.618175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.618207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.618315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.618347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 
00:27:46.317 [2024-11-19 13:19:49.618464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.618497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.618678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.618710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.618822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.618855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.618978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.317 [2024-11-19 13:19:49.619011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.317 qpair failed and we were unable to recover it. 00:27:46.317 [2024-11-19 13:19:49.619242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.619274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.619459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.619492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.619604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.619636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.619804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.619836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.619964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.619998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.620181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.620214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 
00:27:46.318 [2024-11-19 13:19:49.620335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.620366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.620505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.620538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.620732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.620765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.620939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.620986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.621188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.621220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.621337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.621371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.621523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.621554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.621759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.621792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.621904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.621937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.622190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.622222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 
00:27:46.318 [2024-11-19 13:19:49.622344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.622377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.622575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.622608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.622783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.622815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.622931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.622984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.623249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.623286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.623400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.623433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.318 [2024-11-19 13:19:49.623633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.318 [2024-11-19 13:19:49.623665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.318 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.623845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.623878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.624049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.624084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.624229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.624262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 
00:27:46.596 [2024-11-19 13:19:49.624380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.624412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.624522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.624555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.624812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.624844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.624965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.624998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.625236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.625268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.625371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.625404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.625587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.625619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.625794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.625827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.626014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.626047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.626170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.626203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 
00:27:46.596 [2024-11-19 13:19:49.626373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.626405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.626642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.626674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.626795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.626828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.626998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.627031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.627165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.627198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.627371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.627404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.627529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.596 [2024-11-19 13:19:49.627561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.596 qpair failed and we were unable to recover it. 00:27:46.596 [2024-11-19 13:19:49.627680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.627713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.627906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.627938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.628162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.628195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 
00:27:46.597 [2024-11-19 13:19:49.628385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.628417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.628621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.628654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.628767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.628799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.629056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.629089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.629198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.629230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.629352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.629384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.629556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.629589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.629711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.629743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.629913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.629946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.630068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.630101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 
00:27:46.597 [2024-11-19 13:19:49.630303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.630335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.630446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.630477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.630605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.630639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.630877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.630909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.631142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.631182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.631350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.631382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.631517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.631549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.631721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.631753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.631994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.632027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 00:27:46.597 [2024-11-19 13:19:49.632296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.597 [2024-11-19 13:19:49.632330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.597 qpair failed and we were unable to recover it. 
00:27:46.597 [2024-11-19 13:19:49.632512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.597 [2024-11-19 13:19:49.632544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:46.597 qpair failed and we were unable to recover it.
00:27:46.597 [2024-11-19 13:19:49.632737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.597 [2024-11-19 13:19:49.632770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:46.597 qpair failed and we were unable to recover it.
00:27:46.597 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 13:19:49.632 through 13:19:49.677 ...]
00:27:46.603 [2024-11-19 13:19:49.677097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.603 [2024-11-19 13:19:49.677130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:46.603 qpair failed and we were unable to recover it.
00:27:46.603 [2024-11-19 13:19:49.677307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.677340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.677518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.677551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.677660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.677692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.677896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.677930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.678116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.678149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.678364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.678399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.678517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.678550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.678740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.678772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.679013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.679046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.679296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.679329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 
00:27:46.603 [2024-11-19 13:19:49.679457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.679489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.679686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.679720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.679826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.679858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.679978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.680012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.680129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.680162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.680275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.680307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.680501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.680533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.680717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.680749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.680869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.680902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.681086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.681118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 
00:27:46.603 [2024-11-19 13:19:49.681327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.681360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.681550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.681582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.681716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.681749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.681990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.682024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.682142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.682179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.603 [2024-11-19 13:19:49.682420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.603 [2024-11-19 13:19:49.682453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.603 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.682561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.682593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.682789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.682822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.682994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.683027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.683201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.683233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 
00:27:46.604 [2024-11-19 13:19:49.683353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.683386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.683567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.683599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.683785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.683819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.683965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.684001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.684189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.684221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.684337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.684370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.684500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.684533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.684741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.684773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.684891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.684924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.685114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.685147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 
00:27:46.604 [2024-11-19 13:19:49.685277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.685310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.685480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.685512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.685702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.685735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.685913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.685944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.686075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.686108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.686353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.686385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.686553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.686586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.686759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.686791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.686918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.686960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.687080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.687112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 
00:27:46.604 [2024-11-19 13:19:49.687235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.687268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.687512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.687544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.687664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.687697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.687879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.687911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.688098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.688132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.688329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.688361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.688598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.688630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.688817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.688848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.688976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.689010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.689184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.689216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 
00:27:46.604 [2024-11-19 13:19:49.689320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.689353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.689596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.689628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.689754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.689787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.604 [2024-11-19 13:19:49.689982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.604 [2024-11-19 13:19:49.690015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.604 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.690266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.690303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.690434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.690467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.690642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.690674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.690927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.690990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.691122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.691155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.691280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.691312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 
00:27:46.605 [2024-11-19 13:19:49.691422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.691454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.691760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.691793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.691971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.692005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.692180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.692212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.692399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.692431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.692605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.692638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.692875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.692908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.693039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.693074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.693198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.693230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.693341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.693373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 
00:27:46.605 [2024-11-19 13:19:49.693481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.693514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.693635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.693668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.693871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.693904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.694111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.694146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.694337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.694370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.694562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.694595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.694717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.694750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.694994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.695027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.695141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.695174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.695391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.695423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 
00:27:46.605 [2024-11-19 13:19:49.695596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.695629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.695777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.695810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.695933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.695972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.696109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.696142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.696315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.696348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.696521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.696553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.696679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.605 [2024-11-19 13:19:49.696712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.605 qpair failed and we were unable to recover it. 00:27:46.605 [2024-11-19 13:19:49.696829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.696860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.697030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.697063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.697247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.697280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 
00:27:46.606 [2024-11-19 13:19:49.697469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.697502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.697627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.697660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.697843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.697875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.698103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.698137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.698265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.698303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.698564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.698596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.698773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.698805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.699074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.699108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.699350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.699382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.699502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.699535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 
00:27:46.606 [2024-11-19 13:19:49.699704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.699736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.699869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.699903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.700147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.700180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.700422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.700455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.700716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.700748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.700879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.700912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.701126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.701159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.701386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.701420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.701603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.701636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.701815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.701848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 
00:27:46.606 [2024-11-19 13:19:49.701970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.702004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.702202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.702235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.702424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.702456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.702580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.702613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.702720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.702752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.702924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.702966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.703097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.703130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.703266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.703298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.703432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.703465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 00:27:46.606 [2024-11-19 13:19:49.703579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.606 [2024-11-19 13:19:49.703611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.606 qpair failed and we were unable to recover it. 
00:27:46.606 [2024-11-19 13:19:49.703790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.606 [2024-11-19 13:19:49.703822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:46.606 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim for every reconnect attempt from 13:19:49.703790 through 13:19:49.749296 (console timestamps 00:27:46.606 to 00:27:46.612), differing only in the per-attempt timestamps ...]
00:27:46.612 [2024-11-19 13:19:49.749417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.749449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.749633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.749666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.749960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.749994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.750132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.750165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.750336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.750368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.750554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.750587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.750782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.750814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.751051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.751084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.751195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.751227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.751399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.751431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 
00:27:46.612 [2024-11-19 13:19:49.751631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.751664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.751845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.751877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.752043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.752076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.752315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.752348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.752586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.752617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.752805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.752838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.752995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.753029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.753149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.753181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.612 [2024-11-19 13:19:49.753362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.612 [2024-11-19 13:19:49.753396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.612 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.753655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.753687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 
00:27:46.613 [2024-11-19 13:19:49.753796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.753828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.753943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.753984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.754097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.754130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.754324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.754356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.754539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.754571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.754672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.754704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.754889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.754923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.755065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.755097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.755291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.755324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.755445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.755477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 
00:27:46.613 [2024-11-19 13:19:49.755671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.755704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.755966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.756005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.756194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.756228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.756347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.756379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.756502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.756535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.756719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.756752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.756859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.756891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.757075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.757108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.757285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.757318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.757580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.757612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 
00:27:46.613 [2024-11-19 13:19:49.757780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.757813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.758091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.758125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.758254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.758287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.758473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.758506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.758693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.758725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.759016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.759050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.759173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.759205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.759375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.759408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.759575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.759608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.759803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.759836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 
00:27:46.613 [2024-11-19 13:19:49.760027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.760060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.760241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.760273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.760454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.760487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.760722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.760755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.760994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.761026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.761149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.761181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.613 qpair failed and we were unable to recover it. 00:27:46.613 [2024-11-19 13:19:49.761356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.613 [2024-11-19 13:19:49.761389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.761652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.761683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.761881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.761914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.762163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.762197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 
00:27:46.614 [2024-11-19 13:19:49.762383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.762416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.762591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.762623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.762757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.762790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.762972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.763006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.763129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.763161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.763345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.763378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.763549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.763582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.763692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.763725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.763927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.763969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.764200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.764232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 
00:27:46.614 [2024-11-19 13:19:49.764498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.764532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.764714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.764756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.764876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.764910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.765107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.765140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.765378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.765411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.765580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.765613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.765796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.765829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.765940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.765982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.766240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.766274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.766448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.766480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 
00:27:46.614 [2024-11-19 13:19:49.766718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.766751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.766867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.766899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.767109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.767143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.767388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.767420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.767536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.767569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.767747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.767780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.768065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.768098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.768274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.768307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.768428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.768460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 00:27:46.614 [2024-11-19 13:19:49.768642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.768675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.614 qpair failed and we were unable to recover it. 
00:27:46.614 [2024-11-19 13:19:49.768787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.614 [2024-11-19 13:19:49.768820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.769004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.769038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.769171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.769204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.769399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.769431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.769674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.769707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.769878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.769911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.770160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.770195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.770371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.770404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.770669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.770703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.770804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.770837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 
00:27:46.615 [2024-11-19 13:19:49.771006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.771040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.771209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.771242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.771424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.771456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.771693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.771725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.771933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.771977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.772159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.772191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.772390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.772423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.772543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.772576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.772759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.772791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.773033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.773066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 
00:27:46.615 [2024-11-19 13:19:49.773172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.773204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.773398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.773435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.773674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.773708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.773892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.773924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.774184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.774217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.774408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.774441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.774567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.774599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.774869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.774901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.775032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.775065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.775237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.775269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 
00:27:46.615 [2024-11-19 13:19:49.775442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.775475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.775610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.775643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.775831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.775864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.775973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.776005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.776243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.776275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.776492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.776524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.776648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.776680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.776808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.615 [2024-11-19 13:19:49.776842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.615 qpair failed and we were unable to recover it. 00:27:46.615 [2024-11-19 13:19:49.777020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.777055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.777235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.777267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 
00:27:46.616 [2024-11-19 13:19:49.777382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.777414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.777583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.777614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.777738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.777771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.777965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.778000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.778170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.778203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.778413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.778446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.778623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.778656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.778848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.778881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.779099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.779133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.779325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.779359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 
00:27:46.616 [2024-11-19 13:19:49.779469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.779501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.779738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.779771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.779963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.779997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.780260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.780292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.780507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.780540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.780658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.780690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.780867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.780898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.781148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.781181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.781364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.781396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.781587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.781620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 
00:27:46.616 [2024-11-19 13:19:49.781822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.781854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.782054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.782093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.782221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.782253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.782444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.782476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.782604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.782637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.782825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.782857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.782983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.783015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.783201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.783233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.783418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.783451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.783690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.783723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 
00:27:46.616 [2024-11-19 13:19:49.783852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.783884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.784066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.784100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.784306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.784338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.784603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.784636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.784819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.784851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.784979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.616 [2024-11-19 13:19:49.785013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.616 qpair failed and we were unable to recover it. 00:27:46.616 [2024-11-19 13:19:49.785136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.785169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.785353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.785386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.785557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.785590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.785783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.785815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 
00:27:46.617 [2024-11-19 13:19:49.786044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.786078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.786200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.786233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.786417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.786449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.786622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.786655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.786864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.786897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.787142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.787176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.787349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.787381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.787553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.787585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.787847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.787879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.788075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.788108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 
00:27:46.617 [2024-11-19 13:19:49.788376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.788408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.788622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.788655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.788830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.788862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.788984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.789017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.789189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.789222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.789463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.789494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.789694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.789726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.789984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.790017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.790149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.790182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.790379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.790410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 
00:27:46.617 [2024-11-19 13:19:49.790536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.790571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.790809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.790846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.791055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.791089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.791263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.791296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.791482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.791515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.791697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.791729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.791838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.791869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.792135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.792168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.792374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.792407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.792590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.792622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 
00:27:46.617 [2024-11-19 13:19:49.792746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.792779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.792888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.792920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.793083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.793116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.617 [2024-11-19 13:19:49.793327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.617 [2024-11-19 13:19:49.793357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.617 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.793610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.793642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.793830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.793863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.794032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.794066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.794198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.794230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.794401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.794433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.794628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.794660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 
00:27:46.618 [2024-11-19 13:19:49.794831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.794863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.794985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.795018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.795188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.795220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.795338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.795370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.795570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.795603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.795774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.795805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.796042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.796076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.796204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.796236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.796482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.796514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.796688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.796720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 
00:27:46.618 [2024-11-19 13:19:49.796907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.796939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.797140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.797174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.797346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.797379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.797554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.797586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.797821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.797853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.798032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.798065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.798242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.798275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.798449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.798481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.798649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.798682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.798879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.798910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 
00:27:46.618 [2024-11-19 13:19:49.799028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.799062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.799264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.799302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.799503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.799536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.799654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.799685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.799966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.799999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.800118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.800150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.800270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.800304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.800414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.800446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.800617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.800649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.800894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.800927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 
00:27:46.618 [2024-11-19 13:19:49.801063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.801096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.618 [2024-11-19 13:19:49.801295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.618 [2024-11-19 13:19:49.801327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.618 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.801446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.801478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.801653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.801687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.801868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.801899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.802139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.802172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.802346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.802378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.802490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.802524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.802634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.802666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.802784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.802816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 
00:27:46.619 [2024-11-19 13:19:49.802999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.803033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.803291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.803324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.803498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.803530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.803792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.803825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.804030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.804063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.804258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.804291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.804512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.804546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.804731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.804762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.804895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.804928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.805061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.805094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 
00:27:46.619 [2024-11-19 13:19:49.805280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.805313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.805579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.805611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.805782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.805815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.805938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.805980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.806098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.806130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.806366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.806399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.806582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.806615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.806878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.806909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.807198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.807232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.807337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.807369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 
00:27:46.619 [2024-11-19 13:19:49.807637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.807669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.807787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.807830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.808038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.808072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.808193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.808225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.808415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.808447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.808582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.619 [2024-11-19 13:19:49.808615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.619 qpair failed and we were unable to recover it. 00:27:46.619 [2024-11-19 13:19:49.808877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.808909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.809092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.809125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.809311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.809344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.809547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.809580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 
00:27:46.620 [2024-11-19 13:19:49.809693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.809725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.809903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.809937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.810221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.810253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.810373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.810405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.810530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.810562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.810847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.810880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.811001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.811034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.811220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.811252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.811432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.811464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.811670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.811703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 
00:27:46.620 [2024-11-19 13:19:49.811944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.811984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.812116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.812148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.812323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.812355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.812549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.812582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.812764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.812796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.812998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.813032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.813249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.813282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.813462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.813494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.813675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.813709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.813827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.813859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 
00:27:46.620 [2024-11-19 13:19:49.814054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.814088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.814330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.814363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.814474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.814506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.814674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.814707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.814967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.815000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.815123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.815155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.815414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.815446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.815572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.815604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.815841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.815873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 00:27:46.620 [2024-11-19 13:19:49.815987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.620 [2024-11-19 13:19:49.816019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.620 qpair failed and we were unable to recover it. 
00:27:46.626 [2024-11-19 13:19:49.856658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.856691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.856819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.856851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.857027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.857061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.857249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.857281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.857422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.857454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.857625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.857658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.857841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.857873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.858052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.858085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.858274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.858306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.858562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.858595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 
00:27:46.626 [2024-11-19 13:19:49.858710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.858742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.858929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.858971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.859158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.859192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.859308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.859340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.859470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.859502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.859679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.859712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.859956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.859989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.860103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.860136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.860275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.860307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.860490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.860523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 
00:27:46.626 [2024-11-19 13:19:49.860753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.860785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.860969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.861003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.861265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.861297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.861422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.861454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.861704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.861736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.861874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.861907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.862172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.862204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.862445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.862477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.862658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.862689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.862886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.862919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 
00:27:46.626 [2024-11-19 13:19:49.863044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.863075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.863243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.863276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.863450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.863482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.863652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.863684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.863893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.863925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.864037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.626 [2024-11-19 13:19:49.864070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.626 qpair failed and we were unable to recover it. 00:27:46.626 [2024-11-19 13:19:49.864189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.864220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.864340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.864372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.864540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.864578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.864685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.864718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 
00:27:46.627 [2024-11-19 13:19:49.864901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.864933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.865067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.865100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.865268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.865300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.865491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.865523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.865715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.865747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.865984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.866018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.866201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.866234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.866507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.866539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.866731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.866763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.867000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.867033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 
00:27:46.627 [2024-11-19 13:19:49.867219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.867251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.867434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.867466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.867724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.867756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.867967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.868000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.868256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.868288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.868473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.868506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.868630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.868661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.868797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.868830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.868960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.868994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.869123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.869155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 
00:27:46.627 [2024-11-19 13:19:49.869272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.869304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.869555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.869588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.869706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.869738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.869980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.870014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.870306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.870339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.870612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.870644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.870880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.870912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.871127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.871160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.871348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.871380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.871564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.871596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 
00:27:46.627 [2024-11-19 13:19:49.871705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.871738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.871925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.871983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.872171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.872202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.627 [2024-11-19 13:19:49.872362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.627 [2024-11-19 13:19:49.872394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.627 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.872509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.872542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.872806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.872839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.873018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.873051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.873276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.873309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.873498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.873536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.873804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.873837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 
00:27:46.628 [2024-11-19 13:19:49.873967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.874000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.874132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.874165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.874352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.874384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.874710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.874742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.874921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.874962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.875091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.875122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.875296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.875327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.875591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.875623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.875794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.875826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.875997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.876030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 
00:27:46.628 [2024-11-19 13:19:49.876225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.876257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.876427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.876459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.876592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.876626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.876758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.876790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.876982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.877015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.877140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.877172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.877291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.877323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.877615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.877648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.877799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.877830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.878091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.878123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 
00:27:46.628 [2024-11-19 13:19:49.878243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.878276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.878532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.878564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.878668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.878701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.878962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.878996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.879240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.879272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.879408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.879440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.879647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.879680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.879874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.879907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.880159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.880194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.880400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.880432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 
00:27:46.628 [2024-11-19 13:19:49.880614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.880647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.880830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.880862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.881045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.881079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.628 [2024-11-19 13:19:49.881268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.628 [2024-11-19 13:19:49.881301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.628 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.881430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.881462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.881727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.881760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.881941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.881982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.882223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.882256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.882518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.882557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.882732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.882765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 
00:27:46.629 [2024-11-19 13:19:49.882891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.882923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.883102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.883134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.883257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.883289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.883552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.883584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.883822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.883854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.884117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.884150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.884271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.884303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.884510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.884542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.884649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.884681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.884800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.884832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 
00:27:46.629 [2024-11-19 13:19:49.885035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.885068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.885283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.885317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.885588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.885620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.885902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.885934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.886126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.886159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.886328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.886360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.886546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.886577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.886856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.886888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.887190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.887223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 00:27:46.629 [2024-11-19 13:19:49.887427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.629 [2024-11-19 13:19:49.887460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.629 qpair failed and we were unable to recover it. 
00:27:46.629 [2024-11-19 13:19:49.887663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.629 [2024-11-19 13:19:49.887695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:46.629 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair failure repeats for every reconnect attempt from 13:19:49.887884 through 13:19:49.940613, always against tqpair=0x7f0198000b90 at 10.0.0.2, port=4420, each attempt ending "qpair failed and we were unable to recover it." ...]
00:27:46.633 [2024-11-19 13:19:49.940883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.633 [2024-11-19 13:19:49.940917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:46.633 qpair failed and we were unable to recover it.
00:27:46.633 [2024-11-19 13:19:49.941142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.633 [2024-11-19 13:19:49.941175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.633 qpair failed and we were unable to recover it. 00:27:46.633 [2024-11-19 13:19:49.941421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.633 [2024-11-19 13:19:49.941454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.633 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.941740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.941772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.942046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.942081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.942210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.942244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.942513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.942545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.942737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.942770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.942967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.943001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.943242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.943282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.943573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.943606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 
00:27:46.634 [2024-11-19 13:19:49.943895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.943928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.944117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.944151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.944346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.944380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.944576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.944609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.944793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.944828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.944968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.945003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.945246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.945279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.945456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.945490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.945783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.945815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.946009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.946043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 
00:27:46.634 [2024-11-19 13:19:49.946250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.946283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.946580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.946613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.946874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.946907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.947208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.947241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.947441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.947474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.947754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.947786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.947971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.948005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.948276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.948310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.948496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.948529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.948790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.948823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 
00:27:46.634 [2024-11-19 13:19:49.949042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.949076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.949256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.949289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.949573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.949605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.949866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.949899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.950192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.950227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.950418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.950456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.950720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.950753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.950944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.950988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.951241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.951272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.951558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.951591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 
00:27:46.634 [2024-11-19 13:19:49.951806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.951840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.634 [2024-11-19 13:19:49.952112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.634 [2024-11-19 13:19:49.952146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.634 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.952387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.952423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.952687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.952722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.952967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.953001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.953247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.953280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.953596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.953630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.953821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.953854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.954051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.954086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.954269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.954303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 
00:27:46.906 [2024-11-19 13:19:49.954502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.954535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.954794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.954828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.955039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.955073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.955264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.955298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.955540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.955574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.955775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.955807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.956080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.956114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.956359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.956393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.956612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.956645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.956840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.956873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 
00:27:46.906 [2024-11-19 13:19:49.957062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.957097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.957368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.957402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.957584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.957618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.957838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.957870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.958142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.958175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.958434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.958468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.958718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.958750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.958938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.958981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.959193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.959226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.906 [2024-11-19 13:19:49.959474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.959507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 
00:27:46.906 [2024-11-19 13:19:49.959708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.906 [2024-11-19 13:19:49.959741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.906 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.960017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.960052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.960229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.960262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.960385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.960417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.960699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.960733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.960995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.961035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.961241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.961275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.961452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.961486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.961663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.961696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.961993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.962027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 
00:27:46.907 [2024-11-19 13:19:49.962271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.962304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.962571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.962605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.962812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.962845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.963088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.963122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.963410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.963444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.963639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.963672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.963849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.963883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.964060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.964093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.964312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.964346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.964663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.964696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 
00:27:46.907 [2024-11-19 13:19:49.964910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.964944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.965241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.965275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.965407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.965441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.965714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.965747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.966040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.966093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.966242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.966275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.966407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.966440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.966683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.966715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.966887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.966920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.967178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.967211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 
00:27:46.907 [2024-11-19 13:19:49.967422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.967456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.967644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.967677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.967865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.967899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.968154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.968188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.968458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.968491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.968786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.968819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.969084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.969119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.969410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.969443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.969568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.969601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.969894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.969927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 
00:27:46.907 [2024-11-19 13:19:49.970217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.970250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.970469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.970503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.970746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.907 [2024-11-19 13:19:49.970779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.907 qpair failed and we were unable to recover it. 00:27:46.907 [2024-11-19 13:19:49.970985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.971019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 00:27:46.908 [2024-11-19 13:19:49.971130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.971164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 00:27:46.908 [2024-11-19 13:19:49.971444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.971483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 00:27:46.908 [2024-11-19 13:19:49.971607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.971641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 00:27:46.908 [2024-11-19 13:19:49.971886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.971919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 00:27:46.908 [2024-11-19 13:19:49.972202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.972236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 00:27:46.908 [2024-11-19 13:19:49.972446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.972480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 
00:27:46.908 [2024-11-19 13:19:49.972727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.972760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 00:27:46.908 [2024-11-19 13:19:49.972975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.973009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 00:27:46.908 [2024-11-19 13:19:49.973228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.973261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 00:27:46.908 [2024-11-19 13:19:49.973403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.973437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 00:27:46.908 [2024-11-19 13:19:49.973639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.973672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 00:27:46.908 [2024-11-19 13:19:49.973854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.973887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 00:27:46.908 [2024-11-19 13:19:49.974077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.974112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 00:27:46.908 [2024-11-19 13:19:49.974251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.974285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 00:27:46.908 [2024-11-19 13:19:49.974474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.974506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 00:27:46.908 [2024-11-19 13:19:49.974784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.908 [2024-11-19 13:19:49.974818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.908 qpair failed and we were unable to recover it. 
00:27:46.908 [2024-11-19 13:19:49.975098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.908 [2024-11-19 13:19:49.975132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:46.908 qpair failed and we were unable to recover it.
00:27:46.912 (last three messages repeated ~210 times between 13:19:49.975098 and 13:19:50.032916: every connect() attempt to 10.0.0.2 port 4420 on tqpair=0x7f0198000b90 failed with errno = 111 and the qpair could not be recovered)
00:27:46.912 [2024-11-19 13:19:50.033221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.033259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 00:27:46.912 [2024-11-19 13:19:50.033514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.033550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 00:27:46.912 [2024-11-19 13:19:50.033812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.033849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 00:27:46.912 [2024-11-19 13:19:50.034060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.034097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 00:27:46.912 [2024-11-19 13:19:50.034347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.034382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 00:27:46.912 [2024-11-19 13:19:50.034635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.034669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 00:27:46.912 [2024-11-19 13:19:50.034926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.034970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 00:27:46.912 [2024-11-19 13:19:50.035164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.035200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 00:27:46.912 [2024-11-19 13:19:50.035397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.035432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 00:27:46.912 [2024-11-19 13:19:50.035623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.035657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 
00:27:46.912 [2024-11-19 13:19:50.035881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.035916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 00:27:46.912 [2024-11-19 13:19:50.036231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.036268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 00:27:46.912 [2024-11-19 13:19:50.036563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.036598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 00:27:46.912 [2024-11-19 13:19:50.036895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.036931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 00:27:46.912 [2024-11-19 13:19:50.037151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.037186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 00:27:46.912 [2024-11-19 13:19:50.037385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.037425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 00:27:46.912 [2024-11-19 13:19:50.037729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.037765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.912 qpair failed and we were unable to recover it. 00:27:46.912 [2024-11-19 13:19:50.037997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.912 [2024-11-19 13:19:50.038034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.038243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.038277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.038473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.038509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 
00:27:46.913 [2024-11-19 13:19:50.038789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.038826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.039131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.039168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.039373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.039408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.039553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.039588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.039789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.039823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.040138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.040172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.040403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.040439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.040727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.040761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.041032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.041067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.041340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.041375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 
00:27:46.913 [2024-11-19 13:19:50.041666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.041699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.041920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.041980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.042287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.042320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.042538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.042571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.042849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.042883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.043095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.043130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.043318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.043353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.043557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.043591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.043844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.043879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.044014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.044051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 
00:27:46.913 [2024-11-19 13:19:50.044305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.044340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.044598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.044632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.044931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.044977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.045258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.045292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.045496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.045531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.045719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.045754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.045975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.046011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.046288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.046322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.046508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.046542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.046666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.046701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 
00:27:46.913 [2024-11-19 13:19:50.046973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.047008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.047301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.047335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.047581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.047614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.047836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.047870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.048078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.048114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.048392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.048432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.048636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.048670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.048968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.049003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.049305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.049338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.049532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.049566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 
00:27:46.913 [2024-11-19 13:19:50.049761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.049796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.913 [2024-11-19 13:19:50.050076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.913 [2024-11-19 13:19:50.050112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.913 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.050301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.050335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.050485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.050518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.050660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.050694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.050882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.050916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.051237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.051316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.051485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.051523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.051784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.051817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.052025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.052061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 
00:27:46.914 [2024-11-19 13:19:50.052320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.052354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.052548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.052582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.052713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.052748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.052973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.053010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.053156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.053191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.053393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.053426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.053621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.053657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.053803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.053837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.054092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.054129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.054385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.054419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 
00:27:46.914 [2024-11-19 13:19:50.054554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.054587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.054843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.054876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.055163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.055199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.055398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.055432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.055636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.055670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.055926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.055973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.056183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.056233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.056477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.056590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.056728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18daaf0 (9): Bad file descriptor 00:27:46.914 [2024-11-19 13:19:50.060271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.060349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 
00:27:46.914 [2024-11-19 13:19:50.060567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.060604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.060812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.060845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.061051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.061086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.061288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.061321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.061547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.061580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.061781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.061814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.062034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.062070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.062346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.062378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.062515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.062547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.062679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.062711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 
00:27:46.914 [2024-11-19 13:19:50.062992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.063027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.063279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.063312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.063503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.063535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.063777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.063810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.063941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.063984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.064188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.064220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.064358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.064390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.064517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.064549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.064689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.064721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.064895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.064934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 
00:27:46.914 [2024-11-19 13:19:50.065144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.065177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.065430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.065462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.914 qpair failed and we were unable to recover it. 00:27:46.914 [2024-11-19 13:19:50.065667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.914 [2024-11-19 13:19:50.065699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 00:27:46.915 [2024-11-19 13:19:50.065976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.066010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 00:27:46.915 [2024-11-19 13:19:50.066125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.066157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 00:27:46.915 [2024-11-19 13:19:50.066296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.066329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 00:27:46.915 [2024-11-19 13:19:50.066533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.066566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 00:27:46.915 [2024-11-19 13:19:50.066760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.066792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 00:27:46.915 [2024-11-19 13:19:50.067005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.067038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 00:27:46.915 [2024-11-19 13:19:50.067264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.067297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 
00:27:46.915 [2024-11-19 13:19:50.067532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.067565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 00:27:46.915 [2024-11-19 13:19:50.067749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.067781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 00:27:46.915 [2024-11-19 13:19:50.068041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.068076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 00:27:46.915 [2024-11-19 13:19:50.068340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.068372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 00:27:46.915 [2024-11-19 13:19:50.068568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.068601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 00:27:46.915 [2024-11-19 13:19:50.068861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.068894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 00:27:46.915 [2024-11-19 13:19:50.069156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.069189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 00:27:46.915 [2024-11-19 13:19:50.069381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.069413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 00:27:46.915 [2024-11-19 13:19:50.069559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.069592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 00:27:46.915 [2024-11-19 13:19:50.069709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.915 [2024-11-19 13:19:50.069741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:46.915 qpair failed and we were unable to recover it. 
00:27:46.915 [2024-11-19 13:19:50.069934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.915 [2024-11-19 13:19:50.069976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:46.915 qpair failed and we were unable to recover it.
00:27:46.915 [... the same three-message sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats back-to-back roughly 200 more times between 13:19:50.070273 and 13:19:50.122023, cycling through tqpair=0x7f019c000b90, 0x7f0198000b90, 0x18ccba0, and 0x7f01a4000b90; every attempt targets addr=10.0.0.2, port=4420 and fails identically ...]
00:27:46.919 [2024-11-19 13:19:50.122223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.122254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.122394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.122426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.122639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.122670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.122936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.122984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.123122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.123154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.123337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.123369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.123640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.123672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.123918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.123957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.124254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.124286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.124499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.124531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 
00:27:46.919 [2024-11-19 13:19:50.124771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.124803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.125073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.125106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.125308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.125339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.125514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.125545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.125815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.125847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.125994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.126027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.126325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.126356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.126552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.126583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.919 qpair failed and we were unable to recover it. 00:27:46.919 [2024-11-19 13:19:50.126860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.919 [2024-11-19 13:19:50.126891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.127110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.127141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 
00:27:46.920 [2024-11-19 13:19:50.127390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.127421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.127673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.127705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.128002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.128035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.128187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.128224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.128446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.128478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.128623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.128654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.128849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.128881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.129047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.129080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.129287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.129318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.129519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.129551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 
00:27:46.920 [2024-11-19 13:19:50.129739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.129771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.129968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.130001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.130154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.130185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.130334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.130365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.130666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.130697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.130892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.130925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.131101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.131133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.131340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.131372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.131566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.131599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.131783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.131815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 
00:27:46.920 [2024-11-19 13:19:50.132004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.132036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.132194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.132226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.132472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.132503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.132772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.132803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.133050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.133083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.133282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.133314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.133435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.133466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.133683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.133714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.133986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.134018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.134297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.134328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 
00:27:46.920 [2024-11-19 13:19:50.134664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.134696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.134827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.134859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.135050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.135083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.135213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.135245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.135520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.135553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.135731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.135763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.136011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.136044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.136336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.136367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.136512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.136543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.136717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.136749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 
00:27:46.920 [2024-11-19 13:19:50.136960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.136993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.137203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.137236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.137425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.137457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.920 [2024-11-19 13:19:50.137663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.920 [2024-11-19 13:19:50.137699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.920 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.137897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.137928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.138209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.138242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.138437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.138469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.138672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.138704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.138958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.138990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.139288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.139321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 
00:27:46.921 [2024-11-19 13:19:50.139468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.139499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.139778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.139810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.140011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.140043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.140294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.140325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.140529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.140560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.140752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.140782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.141019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.141053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.141203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.141235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.141505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.141537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.141740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.141772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 
00:27:46.921 [2024-11-19 13:19:50.141894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.141925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.142164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.142196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.142348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.142380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.142580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.142611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.142824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.142856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.143001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.143033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.143225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.143256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.143452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.143485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.143720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.143751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.143871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.143904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 
00:27:46.921 [2024-11-19 13:19:50.144082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.144116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.144326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.144357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.144553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.144584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.144823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.144854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.145068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.145102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.145295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.145326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.145475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.145528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.145734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.145766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.146061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.146094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.146289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.146320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 
00:27:46.921 [2024-11-19 13:19:50.146591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.146622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.146915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.146954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.147095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.147127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.147328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.147366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.147649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.147680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.147891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.147922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.148206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.148239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.148432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.148464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.148619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.148651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.148788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.148820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 
00:27:46.921 [2024-11-19 13:19:50.149045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.149079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.149329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.149361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.149602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.921 [2024-11-19 13:19:50.149634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.921 qpair failed and we were unable to recover it. 00:27:46.921 [2024-11-19 13:19:50.149905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.149936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 00:27:46.922 [2024-11-19 13:19:50.150222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.150254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 00:27:46.922 [2024-11-19 13:19:50.150444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.150474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 00:27:46.922 [2024-11-19 13:19:50.150620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.150652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 00:27:46.922 [2024-11-19 13:19:50.150983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.151017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 00:27:46.922 [2024-11-19 13:19:50.151162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.151193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 00:27:46.922 [2024-11-19 13:19:50.151432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.151464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 
00:27:46.922 [2024-11-19 13:19:50.151606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.151639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 00:27:46.922 [2024-11-19 13:19:50.151832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.151864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 00:27:46.922 [2024-11-19 13:19:50.152151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.152184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 00:27:46.922 [2024-11-19 13:19:50.152326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.152358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 00:27:46.922 [2024-11-19 13:19:50.152561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.152592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 00:27:46.922 [2024-11-19 13:19:50.152745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.152776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 00:27:46.922 [2024-11-19 13:19:50.153032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.153066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 00:27:46.922 [2024-11-19 13:19:50.153279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.153310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 00:27:46.922 [2024-11-19 13:19:50.153610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.153642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 00:27:46.922 [2024-11-19 13:19:50.153908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.922 [2024-11-19 13:19:50.153940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.922 qpair failed and we were unable to recover it. 
00:27:46.922 [2024-11-19 13:19:50.154108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.922 [2024-11-19 13:19:50.154140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.922 qpair failed and we were unable to recover it.
00:27:46.926 [identical posix_sock_create/nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f01a4000b90 (addr=10.0.0.2, port=4420) repeats continuously from 13:19:50.154108 through 13:19:50.208216; every attempt ends with "qpair failed and we were unable to recover it."]
00:27:46.926 [2024-11-19 13:19:50.208358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.926 [2024-11-19 13:19:50.208390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.926 qpair failed and we were unable to recover it. 00:27:46.926 [2024-11-19 13:19:50.208591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.926 [2024-11-19 13:19:50.208623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.926 qpair failed and we were unable to recover it. 00:27:46.926 [2024-11-19 13:19:50.208848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.926 [2024-11-19 13:19:50.208880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.926 qpair failed and we were unable to recover it. 00:27:46.926 [2024-11-19 13:19:50.209070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.926 [2024-11-19 13:19:50.209110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.926 qpair failed and we were unable to recover it. 00:27:46.926 [2024-11-19 13:19:50.209294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.926 [2024-11-19 13:19:50.209327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.926 qpair failed and we were unable to recover it. 00:27:46.926 [2024-11-19 13:19:50.209518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.926 [2024-11-19 13:19:50.209550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.926 qpair failed and we were unable to recover it. 00:27:46.926 [2024-11-19 13:19:50.209741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.926 [2024-11-19 13:19:50.209773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.926 qpair failed and we were unable to recover it. 00:27:46.926 [2024-11-19 13:19:50.210097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.926 [2024-11-19 13:19:50.210130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.926 qpair failed and we were unable to recover it. 00:27:46.926 [2024-11-19 13:19:50.210328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.926 [2024-11-19 13:19:50.210360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.926 qpair failed and we were unable to recover it. 00:27:46.926 [2024-11-19 13:19:50.210565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.926 [2024-11-19 13:19:50.210597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.926 qpair failed and we were unable to recover it. 
00:27:46.926 [2024-11-19 13:19:50.210714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.926 [2024-11-19 13:19:50.210745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.926 qpair failed and we were unable to recover it. 00:27:46.926 [2024-11-19 13:19:50.211016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.926 [2024-11-19 13:19:50.211050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.926 qpair failed and we were unable to recover it. 00:27:46.926 [2024-11-19 13:19:50.211254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.926 [2024-11-19 13:19:50.211287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.926 qpair failed and we were unable to recover it. 00:27:46.926 [2024-11-19 13:19:50.211482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.926 [2024-11-19 13:19:50.211514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.926 qpair failed and we were unable to recover it. 00:27:46.926 [2024-11-19 13:19:50.211812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.926 [2024-11-19 13:19:50.211844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.926 qpair failed and we were unable to recover it. 00:27:46.926 [2024-11-19 13:19:50.212052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.926 [2024-11-19 13:19:50.212085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.926 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.212340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.212372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.212658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.212690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.212891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.212924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.213124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.213157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 
00:27:46.927 [2024-11-19 13:19:50.213414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.213446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.213575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.213607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.213882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.213914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.214129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.214162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.214345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.214378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.214524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.214555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.214850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.214882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.215050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.215084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.215344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.215376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.215670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.215701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 
00:27:46.927 [2024-11-19 13:19:50.215984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.216018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.216222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.216254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.216381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.216413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.216713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.216745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.217036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.217069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.217218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.217251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.217456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.217488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.217796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.217828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.218085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.218118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.218336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.218369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 
00:27:46.927 [2024-11-19 13:19:50.218598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.218631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.218831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.218865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.219071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.219105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.219302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.219341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.219545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.219578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.219904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.219935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.220077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.220109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.220307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.220339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.220553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.220585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.220799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.220832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 
00:27:46.927 [2024-11-19 13:19:50.221119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.221152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.221428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.221461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.221666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.221698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.221920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.221959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.222165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.222198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.222402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.222434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.222699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.222731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.222931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.222974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.223191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.223224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.223368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.223400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 
00:27:46.927 [2024-11-19 13:19:50.223604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.927 [2024-11-19 13:19:50.223635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.927 qpair failed and we were unable to recover it. 00:27:46.927 [2024-11-19 13:19:50.223773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.223805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.224024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.224057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.224255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.224288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.224507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.224540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.224688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.224720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.224975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.225009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.225259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.225292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.225500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.225532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.225754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.225786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 
00:27:46.928 [2024-11-19 13:19:50.226046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.226080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.226317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.226349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.226654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.226687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.226929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.226981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.227168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.227201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.227422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.227455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.227679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.227711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.227996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.228031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.228256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.228288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.228493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.228525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 
00:27:46.928 [2024-11-19 13:19:50.228775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.228807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.229052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.229085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.229305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.229338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.229476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.229514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.229710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.229741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.229935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.229976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.230133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.230166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.230362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.230394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.230670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.230703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.230901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.230932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 
00:27:46.928 [2024-11-19 13:19:50.231058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.231091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.231252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.231284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.231425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.231457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.231686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.231719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.232001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.232034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.232193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.232225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.232372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.232404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.232704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.232736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.232859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.232891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.233101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.233134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 
00:27:46.928 [2024-11-19 13:19:50.233443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.233476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.233690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.233723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.234003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.234037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.234240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.234272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.234523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.234556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.234869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.234902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.235110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.235142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.235397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.235430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.235734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.235766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.236044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.236077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 
00:27:46.928 [2024-11-19 13:19:50.236237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.928 [2024-11-19 13:19:50.236269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.928 qpair failed and we were unable to recover it. 00:27:46.928 [2024-11-19 13:19:50.236546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.236578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.236855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.236887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.237118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.237151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.237350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.237382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.237525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.237557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.237768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.237800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.238011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.238044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.238227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.238259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.238464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.238496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 
00:27:46.929 [2024-11-19 13:19:50.238797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.238830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.239087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.239120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.239328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.239360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.239671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.239703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.239902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.239935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.240095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.240127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.240342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.240374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.240569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.240601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.240745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.240777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.240987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.241021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 
00:27:46.929 [2024-11-19 13:19:50.241161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.241193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.241445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.241477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.241705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.241738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.241928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.241970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.242151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.242183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.242339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.242371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.242505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.242537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.242688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.242720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.243017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.243051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 00:27:46.929 [2024-11-19 13:19:50.243259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.929 [2024-11-19 13:19:50.243291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:46.929 qpair failed and we were unable to recover it. 
00:27:46.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3004176 Killed "${NVMF_APP[@]}" "$@"
00:27:46.929 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:46.929 [2024-11-19 13:19:50.246122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.929 [2024-11-19 13:19:50.246164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.929 qpair failed and we were unable to recover it.
00:27:46.929 [2024-11-19 13:19:50.246318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.929 [2024-11-19 13:19:50.246352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.929 qpair failed and we were unable to recover it.
00:27:46.929 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:46.929 [2024-11-19 13:19:50.246497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.929 [2024-11-19 13:19:50.246531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.929 qpair failed and we were unable to recover it.
00:27:46.929 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:46.929 [2024-11-19 13:19:50.246824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.929 [2024-11-19 13:19:50.246859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.929 qpair failed and we were unable to recover it.
00:27:46.929 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:46.929 [2024-11-19 13:19:50.247059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.929 [2024-11-19 13:19:50.247094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.929 qpair failed and we were unable to recover it.
00:27:46.929 [2024-11-19 13:19:50.247300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.929 [2024-11-19 13:19:50.247335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.929 qpair failed and we were unable to recover it.
00:27:46.929 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:46.929 [2024-11-19 13:19:50.247583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.929 [2024-11-19 13:19:50.247615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.929 qpair failed and we were unable to recover it.
00:27:46.929 [2024-11-19 13:19:50.247814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.929 [2024-11-19 13:19:50.247846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.929 qpair failed and we were unable to recover it.
00:27:46.929 [2024-11-19 13:19:50.248008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.929 [2024-11-19 13:19:50.248042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.929 qpair failed and we were unable to recover it.
00:27:46.929 [2024-11-19 13:19:50.248226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.929 [2024-11-19 13:19:50.248259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.929 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.248513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.248547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.248774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.248807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.249013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.249047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.249269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.249303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.249439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.249472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.249672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.249705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.249939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.249979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.250130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.250162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.250361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.250394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.250665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.250698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.250911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.250944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.251158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.251190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.251337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.251369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.251654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.251687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.251938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.251981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.252140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.252183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.252380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.252413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.252663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.252695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.252905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.252938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.253220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.253252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.253473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.253506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.253801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.253833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.254129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.254163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.254381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.254413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.254626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.254659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3004936
00:27:46.930 [2024-11-19 13:19:50.254859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.254894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.255066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.255099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3004936
00:27:46.930 [2024-11-19 13:19:50.255246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:46.930 [2024-11-19 13:19:50.255282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.255473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.255505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3004936 ']'
00:27:46.930 [2024-11-19 13:19:50.255707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.255741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:46.930 [2024-11-19 13:19:50.256021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.256056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:46.930 [2024-11-19 13:19:50.256337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.256371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:46.930 [2024-11-19 13:19:50.256603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.256638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
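Interleaved with the connect errors, the test is restarting the target: a fresh nvmf_tgt (pid 3004936) is launched inside the cvl_0_0_ns_spdk namespace, and waitforlisten polls until the new process answers on the RPC socket /var/tmp/spdk.sock, up to max_retries=100. waitforlisten itself is a bash helper in autotest_common.sh; the following is only a rough C sketch of the same idea, and the 100 ms poll interval is an assumption:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Poll until something accepts connections on the given UNIX socket. */
    static int wait_for_listen(const char *path, int max_retries)
    {
        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd >= 0 && connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                close(fd);
                return 0;           /* target is up and listening */
            }
            if (fd >= 0)
                close(fd);
            usleep(100 * 1000);     /* retry interval: an assumption */
        }
        return -1;                  /* gave up: process never listened */
    }

    int main(void)
    {
        /* /var/tmp/spdk.sock and max_retries=100 come from the trace above. */
        return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
    }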
00:27:46.930 [2024-11-19 13:19:50.256781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.256820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:46.930 [2024-11-19 13:19:50.257048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.257084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:46.930 [2024-11-19 13:19:50.257290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.257326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.257581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.257613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.257845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.257878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.258199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.258234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.258529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.258562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.258756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.258789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.259009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.930 [2024-11-19 13:19:50.259043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.930 qpair failed and we were unable to recover it.
00:27:46.930 [2024-11-19 13:19:50.259155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.259189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.259494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.259527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.259728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.259761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.259883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.259916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.260162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.260196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.260411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.260442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.260755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.260788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.261044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.261078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.261274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.261315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.261526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.261561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.261767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.261801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.261914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.261954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.262103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.262137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.262340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.262372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.262503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.262535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.262758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.262791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.262999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.263033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.263318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.263351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.263497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.263531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.263808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.263840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.264006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.264040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.264199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.264233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.264466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.264498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.264722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.264755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.264897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.264933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.265223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.265257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.265443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.265476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.265743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.265777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.265926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.265968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.266128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.266161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.266373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.266408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.266550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.266583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.266880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.266914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.267150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.267184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.267398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.267432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.267648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.267688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:46.931 qpair failed and we were unable to recover it.
00:27:46.931 [2024-11-19 13:19:50.267931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.931 [2024-11-19 13:19:50.267996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.268186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.268220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.268498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.268534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.268723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.268755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.268970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.269005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.269235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.269268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.269400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.269435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.269688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.269721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.269864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.269899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.270153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.270189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.270392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.270424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.270736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.270768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.271077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.271110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.271241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.271274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.271477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.271510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.271726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.271759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.272011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.272045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.272301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.272333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.272565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.272598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.272794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.272826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.273071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.273106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.273311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.273345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.273577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.273610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.273802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.273834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.274086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.210 [2024-11-19 13:19:50.274121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.210 qpair failed and we were unable to recover it.
00:27:47.210 [2024-11-19 13:19:50.274314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.274348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.274585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.274617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.274747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.274778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.274989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.275023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.275336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.275369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.275649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.275682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.275861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.275894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.276122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.276157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.276459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.276492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.276744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.276778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.276987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.277021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.277273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.277307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.277488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.277524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.277715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.277752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.277961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.278002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.278258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.278290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.278498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.278531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.278779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.278811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.279032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.279066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.279280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.279313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.279518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.279555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.279832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.279866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.280008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.280042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.280341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.280376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.280564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.280596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.280732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.280764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.280963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.280996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.281252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.281284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.281573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.281606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.281820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.281852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.282012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.282045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.282258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.282290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.282502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.282534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.282803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.282836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.283100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.283134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.283274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.283306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.283437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.283469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.211 [2024-11-19 13:19:50.283608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.211 [2024-11-19 13:19:50.283642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.211 qpair failed and we were unable to recover it.
00:27:47.212 [2024-11-19 13:19:50.283799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.212 [2024-11-19 13:19:50.283832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.212 qpair failed and we were unable to recover it.
00:27:47.212 [2024-11-19 13:19:50.284058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.212 [2024-11-19 13:19:50.284093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.212 qpair failed and we were unable to recover it.
00:27:47.212 [2024-11-19 13:19:50.284217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.212 [2024-11-19 13:19:50.284248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.212 qpair failed and we were unable to recover it.
00:27:47.212 [2024-11-19 13:19:50.284448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.212 [2024-11-19 13:19:50.284481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.212 qpair failed and we were unable to recover it.
00:27:47.212 [2024-11-19 13:19:50.284734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.212 [2024-11-19 13:19:50.284768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.212 qpair failed and we were unable to recover it.
00:27:47.212 [2024-11-19 13:19:50.285056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.212 [2024-11-19 13:19:50.285090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.212 qpair failed and we were unable to recover it.
00:27:47.212 [2024-11-19 13:19:50.285238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.212 [2024-11-19 13:19:50.285271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.212 qpair failed and we were unable to recover it.
00:27:47.212 [2024-11-19 13:19:50.285500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.212 [2024-11-19 13:19:50.285534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.212 qpair failed and we were unable to recover it.
00:27:47.212 [2024-11-19 13:19:50.285724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.212 [2024-11-19 13:19:50.285756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.212 qpair failed and we were unable to recover it.
00:27:47.212 [2024-11-19 13:19:50.285970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.212 [2024-11-19 13:19:50.286005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.212 qpair failed and we were unable to recover it.
00:27:47.212 [2024-11-19 13:19:50.286164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.212 [2024-11-19 13:19:50.286197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.212 qpair failed and we were unable to recover it.
00:27:47.212 [2024-11-19 13:19:50.286342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.286374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.286514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.286549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.286760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.286795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.287036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.287069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.287268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.287301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.287421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.287460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.287662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.287694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.287819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.287852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.288067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.288103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.288239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.288272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 
00:27:47.212 [2024-11-19 13:19:50.288397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.288429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.290091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.290154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.290385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.290419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.290616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.290649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.290801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.290834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.290976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.291010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.291144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.291177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.291301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.291336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.291471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.291503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.291665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.291701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 
00:27:47.212 [2024-11-19 13:19:50.291913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.291946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.292162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.292195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.292413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.292447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.292602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.292636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.292889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.292922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.293057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.212 [2024-11-19 13:19:50.293090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.212 qpair failed and we were unable to recover it. 00:27:47.212 [2024-11-19 13:19:50.293239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.293271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.293401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.293434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.293662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.293694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.293819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.293851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 
00:27:47.213 [2024-11-19 13:19:50.294038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.294072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.294266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.294299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.294443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.294476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.294673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.294708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.294846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.294878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.295080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.295114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.295334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.295366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.295562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.295594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.295714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.295746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.295939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.295980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 
00:27:47.213 [2024-11-19 13:19:50.296097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.296129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.296239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.296272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.296416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.296448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.296577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.296608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.296748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.296780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.297002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.297042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.297170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.297203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.297438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.297471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.297603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.297636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.297840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.297871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 
00:27:47.213 [2024-11-19 13:19:50.298059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.298094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.298279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.298312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.298498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.298530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.298664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.298696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.298899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.298931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.299087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.299120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.299378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.299411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.299606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.299638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.299764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.299795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.299918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.299962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 
00:27:47.213 [2024-11-19 13:19:50.300082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.300114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.300300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.300333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.300561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.300594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.213 [2024-11-19 13:19:50.300881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.213 [2024-11-19 13:19:50.300924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.213 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.301175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.301208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.301324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.301355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.301570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.301601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.301809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.301841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.302127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.302160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.302343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.302374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 
00:27:47.214 [2024-11-19 13:19:50.302503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.302535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.302672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.302704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.302861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.302893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.303032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.303065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.303316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.303348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.303479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.303513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.303622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.303654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.303846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.303878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.304009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.304042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.304308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.304340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 
00:27:47.214 [2024-11-19 13:19:50.304546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.304579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.304779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.304810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.305008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.305041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.305181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.305212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.305464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.305496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.305621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.305658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.305923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.305977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.306167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.306200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.306418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.306451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.306629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.306662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 
00:27:47.214 [2024-11-19 13:19:50.306932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.306978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.307092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.307123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.307397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.307429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.307685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.307716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.307838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.307869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.214 qpair failed and we were unable to recover it. 00:27:47.214 [2024-11-19 13:19:50.308067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.214 [2024-11-19 13:19:50.308101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.215 qpair failed and we were unable to recover it. 00:27:47.215 [2024-11-19 13:19:50.308297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.215 [2024-11-19 13:19:50.308328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.215 qpair failed and we were unable to recover it. 00:27:47.215 [2024-11-19 13:19:50.308473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.215 [2024-11-19 13:19:50.308505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.215 qpair failed and we were unable to recover it. 00:27:47.215 [2024-11-19 13:19:50.308691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.215 [2024-11-19 13:19:50.308722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.215 qpair failed and we were unable to recover it. 00:27:47.215 [2024-11-19 13:19:50.308922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.215 [2024-11-19 13:19:50.308962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.215 qpair failed and we were unable to recover it. 
00:27:47.215 [2024-11-19 13:19:50.309074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.215 [2024-11-19 13:19:50.309105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.215 qpair failed and we were unable to recover it. 00:27:47.215 [2024-11-19 13:19:50.309347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.215 [2024-11-19 13:19:50.309424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.215 qpair failed and we were unable to recover it. 00:27:47.215 [2024-11-19 13:19:50.309587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.215 [2024-11-19 13:19:50.309624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.215 qpair failed and we were unable to recover it. 00:27:47.215 [2024-11-19 13:19:50.309910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.215 [2024-11-19 13:19:50.309944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.215 qpair failed and we were unable to recover it. 00:27:47.215 [2024-11-19 13:19:50.310102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.215 [2024-11-19 13:19:50.310135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.215 qpair failed and we were unable to recover it. 00:27:47.215 [2024-11-19 13:19:50.310255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.215 [2024-11-19 13:19:50.310287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.215 qpair failed and we were unable to recover it. 00:27:47.215 [2024-11-19 13:19:50.310462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.215 [2024-11-19 13:19:50.310495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.215 qpair failed and we were unable to recover it. 00:27:47.215 [2024-11-19 13:19:50.310604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.215 [2024-11-19 13:19:50.310635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.215 qpair failed and we were unable to recover it. 00:27:47.215 [2024-11-19 13:19:50.310764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.215 [2024-11-19 13:19:50.310797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.215 qpair failed and we were unable to recover it. 00:27:47.215 [2024-11-19 13:19:50.310991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.215 [2024-11-19 13:19:50.311024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.215 qpair failed and we were unable to recover it. 
00:27:47.215 [the connect() failed, errno = 111 / qpair failure pattern continues for tqpair=0x7f0198000b90 from 13:19:50.311263 to 13:19:50.311749; duplicate entries condensed]
00:27:47.215 [2024-11-19 13:19:50.311764] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:27:47.215 [2024-11-19 13:19:50.311842] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:47.215 [the same failure pattern resumes for tqpair=0x7f0198000b90 from 13:19:50.311933 to 13:19:50.313081; duplicate entries condensed]
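[editor's note: the "DPDK EAL parameters" line above records how the nvmf process initialized DPDK's Environment Abstraction Layer. As a minimal sketch only (this is not SPDK's actual startup code, and it uses just a subset of the logged flags), an application hands such an argument vector to rte_eal_init():

/* Minimal sketch, not SPDK's startup code: feeding EAL parameters
 * like the ones logged above into rte_eal_init(). Requires DPDK
 * headers and libraries to build. */
#include <rte_eal.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Subset of the logged "[ DPDK EAL parameters: ... ]" line:
     * -c 0xF0        -> hex core mask, i.e. run on cores 4-7
     * --file-prefix  -> namespace hugepage files so several SPDK
     *                   instances can coexist on one host
     * --proc-type    -> auto-detect primary vs. secondary process */
    char *eal_argv[] = {
        "nvmf", "-c", "0xF0", "--no-telemetry",
        "--file-prefix=spdk0", "--proc-type=auto", NULL,
    };
    int eal_argc = 6;

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "rte_eal_init() failed\n");
        return EXIT_FAILURE;
    }
    /* ... application setup would continue here ... */
    return EXIT_SUCCESS;
}

end editor's note]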
00:27:47.215 [the failure pattern repeats 5 more times for tqpair=0x7f0198000b90 through 13:19:50.314353, then ~35 times for tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 from 13:19:50.314556 to 13:19:50.321390; duplicate entries condensed]
00:27:47.216 [the failure pattern repeats 4 more times for tqpair=0x7f01a4000b90 through 13:19:50.322238, once for tqpair=0x7f0198000b90 at 13:19:50.322543, and once for tqpair=0x18ccba0 at 13:19:50.322828; duplicate entries condensed]
00:27:47.216 [2024-11-19 13:19:50.323183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.216 [2024-11-19 13:19:50.323260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:47.216 qpair failed and we were unable to recover it.
00:27:47.217 [the three entries above repeat ~24 times for tqpair=0x7f019c000b90 through 13:19:50.328490; duplicate entries condensed]
00:27:47.217 [2024-11-19 13:19:50.328708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.217 [2024-11-19 13:19:50.328742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.217 qpair failed and we were unable to recover it. 00:27:47.217 [2024-11-19 13:19:50.329028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.217 [2024-11-19 13:19:50.329060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.217 qpair failed and we were unable to recover it. 00:27:47.217 [2024-11-19 13:19:50.329207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.217 [2024-11-19 13:19:50.329240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.217 qpair failed and we were unable to recover it. 00:27:47.217 [2024-11-19 13:19:50.329416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.217 [2024-11-19 13:19:50.329450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.217 qpair failed and we were unable to recover it. 00:27:47.217 [2024-11-19 13:19:50.329675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.217 [2024-11-19 13:19:50.329707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.217 qpair failed and we were unable to recover it. 00:27:47.217 [2024-11-19 13:19:50.329823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.217 [2024-11-19 13:19:50.329857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.217 qpair failed and we were unable to recover it. 00:27:47.217 [2024-11-19 13:19:50.330067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.217 [2024-11-19 13:19:50.330101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.217 qpair failed and we were unable to recover it. 00:27:47.217 [2024-11-19 13:19:50.330296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.217 [2024-11-19 13:19:50.330328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.217 qpair failed and we were unable to recover it. 00:27:47.217 [2024-11-19 13:19:50.330465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.217 [2024-11-19 13:19:50.330498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.217 qpair failed and we were unable to recover it. 00:27:47.217 [2024-11-19 13:19:50.330639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.217 [2024-11-19 13:19:50.330672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.217 qpair failed and we were unable to recover it. 
00:27:47.217 [2024-11-19 13:19:50.330890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.217 [2024-11-19 13:19:50.330922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.217 qpair failed and we were unable to recover it. 00:27:47.217 [2024-11-19 13:19:50.331128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.217 [2024-11-19 13:19:50.331162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.217 qpair failed and we were unable to recover it. 00:27:47.217 [2024-11-19 13:19:50.331276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.217 [2024-11-19 13:19:50.331309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.217 qpair failed and we were unable to recover it. 00:27:47.217 [2024-11-19 13:19:50.331580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.217 [2024-11-19 13:19:50.331612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.217 qpair failed and we were unable to recover it. 00:27:47.217 [2024-11-19 13:19:50.331737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.217 [2024-11-19 13:19:50.331770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.217 qpair failed and we were unable to recover it. 00:27:47.217 [2024-11-19 13:19:50.332042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.332118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.332267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.332305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.332496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.332529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.332658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.332691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.332890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.332922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 
00:27:47.218 [2024-11-19 13:19:50.333137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.333172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.333352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.333384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.333594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.333626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.333820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.333852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.334066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.334099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.334289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.334321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.334452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.334485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.334661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.334693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.334870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.334902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.335035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.335070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 
00:27:47.218 [2024-11-19 13:19:50.335263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.335294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.335411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.335443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.335629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.335662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.335788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.335819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.335964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.335998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.336190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.336223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.336341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.336373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.336561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.336593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.336869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.336901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.337059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.337094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 
00:27:47.218 [2024-11-19 13:19:50.337279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.337312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.337445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.337480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.337613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.337652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.337907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.337939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.338089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.338121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.338255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.338290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.338476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.338509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.338644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.338676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.338885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.338918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.339067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.339100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 
00:27:47.218 [2024-11-19 13:19:50.339223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.339262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.339377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.339409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.339585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.339617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.218 qpair failed and we were unable to recover it. 00:27:47.218 [2024-11-19 13:19:50.339816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.218 [2024-11-19 13:19:50.339849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.340037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.340071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.340187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.340220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.340408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.340441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.340579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.340611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.340885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.340916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.341057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.341091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 
00:27:47.219 [2024-11-19 13:19:50.341220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.341252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.341386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.341417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.341530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.341562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.341824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.341857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.342114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.342148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.342257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.342289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.342400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.342432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.342701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.342732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.342975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.343009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.343205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.343243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 
00:27:47.219 [2024-11-19 13:19:50.343367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.343398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.343520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.343552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.343676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.343707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.343839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.343871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.343986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.344019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.344155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.344188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.344362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.344395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.344658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.344691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.344939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.344979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.345106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.345138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 
00:27:47.219 [2024-11-19 13:19:50.345256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.345289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.345408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.345439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.345629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.345660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.345852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.345884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.346007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.346041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.346219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.346251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.346451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.346484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.219 qpair failed and we were unable to recover it. 00:27:47.219 [2024-11-19 13:19:50.346679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.219 [2024-11-19 13:19:50.346711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.346898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.346930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.347197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.347230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 
00:27:47.220 [2024-11-19 13:19:50.347405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.347438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.347622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.347653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.347789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.347821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.348001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.348032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.348146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.348179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.348387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.348419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.348593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.348635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.348838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.348870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.349016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.349048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.349170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.349202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 
00:27:47.220 [2024-11-19 13:19:50.349444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.349476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.349650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.349683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.349866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.349897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.350018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.350050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.350249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.350282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.350465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.350498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.350675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.350706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.350887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.350919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.351134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.351206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.351357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.351394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 
00:27:47.220 [2024-11-19 13:19:50.351578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.351651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.351864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.351899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.352121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.352153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.352285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.352317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.352447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.352478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.352620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.352651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.352771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.352803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.352971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.353005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.353123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.353155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.353339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.353370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 
00:27:47.220 [2024-11-19 13:19:50.353596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.353629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.353812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.353845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.354036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.354070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.354181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.354212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.354349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.354381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.220 [2024-11-19 13:19:50.354588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.220 [2024-11-19 13:19:50.354620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.220 qpair failed and we were unable to recover it. 00:27:47.221 [2024-11-19 13:19:50.354806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.221 [2024-11-19 13:19:50.354837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.221 qpair failed and we were unable to recover it. 00:27:47.221 [2024-11-19 13:19:50.355086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.221 [2024-11-19 13:19:50.355119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.221 qpair failed and we were unable to recover it. 00:27:47.221 [2024-11-19 13:19:50.355241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.221 [2024-11-19 13:19:50.355274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.221 qpair failed and we were unable to recover it. 00:27:47.221 [2024-11-19 13:19:50.355406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.221 [2024-11-19 13:19:50.355437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.221 qpair failed and we were unable to recover it. 
00:27:47.221 [2024-11-19 13:19:50.355563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.221 [2024-11-19 13:19:50.355595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.221 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x18ccba0 through 2024-11-19 13:19:50.374229 ...]
00:27:47.224 [2024-11-19 13:19:50.374457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.224 [2024-11-19 13:19:50.374528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.224 qpair failed and we were unable to recover it.
[... triplet repeats for tqpair=0x7f01a4000b90 through 2024-11-19 13:19:50.381620 ...]
00:27:47.225 [2024-11-19 13:19:50.381876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.225 [2024-11-19 13:19:50.381945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.225 qpair failed and we were unable to recover it.
[... triplet repeats for tqpair=0x7f019c000b90 through 2024-11-19 13:19:50.394324 ...]
00:27:47.227 [2024-11-19 13:19:50.394439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.394471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.394659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.394690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.394821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.394853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.394969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.395004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.395180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.395212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.395314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.395345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.395455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.395487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.395612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.395644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.395815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.395847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.395989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.396022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 
00:27:47.227 [2024-11-19 13:19:50.396198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.396230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.396347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.396379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.396622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.396660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.396782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.396814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.396932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.396973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.397154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.397187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.397314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.397345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:47.227 [2024-11-19 13:19:50.397348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.397532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.397564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.397669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.397702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 
00:27:47.227 [2024-11-19 13:19:50.397821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.397852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.397982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.398014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.227 [2024-11-19 13:19:50.398230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.227 [2024-11-19 13:19:50.398262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.227 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.398381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.398413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.398521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.398554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.398676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.398708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.398834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.398871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.399000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.399034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.399160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.399191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.399314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.399346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 
00:27:47.228 [2024-11-19 13:19:50.399453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.399484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.399665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.399697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.399877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.399908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.400060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.400093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.400202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.400234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.400360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.400392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.400527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.400560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.400666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.400698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.400804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.400836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.400966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.401000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 
00:27:47.228 [2024-11-19 13:19:50.401207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.401240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.401425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.401457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.401632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.401665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.401786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.401818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.401986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.402019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.402151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.402188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.402369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.402400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.402607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.402639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.402757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.402790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.402912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.402944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 
00:27:47.228 [2024-11-19 13:19:50.403227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.403259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.403377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.403409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.403525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.403557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.403737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.403769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.404015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.404049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.404183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.404215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.404397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.404429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.404560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.404592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.404768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.404801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 00:27:47.228 [2024-11-19 13:19:50.404995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.228 [2024-11-19 13:19:50.405028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.228 qpair failed and we were unable to recover it. 
00:27:47.228 [2024-11-19 13:19:50.405168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.405200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.405317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.405350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.405461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.405494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.405624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.405656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.405836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.405868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.406056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.406089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.406221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.406258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.406367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.406400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.406516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.406548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.406662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.406694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 
00:27:47.229 [2024-11-19 13:19:50.406799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.406832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.406937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.406990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.407097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.407129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.407313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.407346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.407455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.407488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.407597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.407629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.407747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.407781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.407973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.408006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.408134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.408167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.408287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.408320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 
00:27:47.229 [2024-11-19 13:19:50.408445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.408478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.408656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.408689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.408803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.408835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.408936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.408979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.409098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.409131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.409309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.409342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.409532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.409565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.409810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.409843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.410031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.410064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.410177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.410210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 
00:27:47.229 [2024-11-19 13:19:50.410314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.410345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.410550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.410582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.410698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.410729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.410861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.229 [2024-11-19 13:19:50.410893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.229 qpair failed and we were unable to recover it. 00:27:47.229 [2024-11-19 13:19:50.411021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.411055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.411227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.411259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.411430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.411463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.411581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.411614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.411783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.411814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.411946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.411991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 
00:27:47.230 [2024-11-19 13:19:50.412096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.412128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.412243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.412274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.412400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.412432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.412610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.412642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.412831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.412863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.412989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.413039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.413162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.413200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.413320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.413352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.413472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.413505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.413612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.413644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 
00:27:47.230 [2024-11-19 13:19:50.413762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.413794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.413902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.413934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.414132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.414164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.414339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.414370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.414475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.414507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.414621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.414653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.414773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.414804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.414988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.415021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.415147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.415179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.415283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.415315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 
00:27:47.230 [2024-11-19 13:19:50.415457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.415490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.415665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.415697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.415825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.415857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.416038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.416072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.416185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.416217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.416343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.416374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.416479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.416511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.416617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.416648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.416762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.416795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.416912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.416945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 
00:27:47.230 [2024-11-19 13:19:50.417133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.417163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.230 [2024-11-19 13:19:50.417340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.230 [2024-11-19 13:19:50.417369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.230 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.417471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.417500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.417610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.417640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.417749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.417778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.417981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.418011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.418133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.418162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.418286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.418315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.418416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.418445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.418550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.418579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 
00:27:47.231 [2024-11-19 13:19:50.418817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.418846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.418945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.419004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.419114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.419145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.419312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.419340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.419520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.419549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.419727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.419758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.419854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.419888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.420009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.420039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.420141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.420170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.420349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.420379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 
00:27:47.231 [2024-11-19 13:19:50.420472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.420501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.420602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.420632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.420734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.420763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.420868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.420897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.421004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.421034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.421204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.421234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.421335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.421364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.421532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.421561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.421668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.421698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.421872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.421902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 
00:27:47.231 [2024-11-19 13:19:50.422047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.422078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.422178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.422206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.422373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.422403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.422537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.422566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.422675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.422705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.422821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.422851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.423021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.423052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.423152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.423181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.231 [2024-11-19 13:19:50.423297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.231 [2024-11-19 13:19:50.423327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.231 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.423429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.423458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 
00:27:47.232 [2024-11-19 13:19:50.423623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.423653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.423754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.423783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.423889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.423919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.424030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.424060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.424292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.424321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.424510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.424539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.424675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.424704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.424806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.424836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.424932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.424969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.425142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.425171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 
00:27:47.232 [2024-11-19 13:19:50.425334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.425364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.425476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.425505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.425627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.425656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.425751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.425780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.425885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.425914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.426016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.426046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.426145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.426179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.426384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.426414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.426524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.426554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.426656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.426685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 
00:27:47.232 [2024-11-19 13:19:50.426918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.426973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.427086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.427115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.427221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.427259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.427422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.427449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.427542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.427569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.427683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.427711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.427810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.427836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.427942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.427979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.428076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.428103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.428287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.428315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 
00:27:47.232 [2024-11-19 13:19:50.428416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.428443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.428541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.428568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.428666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.428693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.428785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.428812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.428905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.428931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.429046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.429073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.232 [2024-11-19 13:19:50.429167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.232 [2024-11-19 13:19:50.429194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.232 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.429288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.429315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.429480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.429506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.429608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.429635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 
00:27:47.233 [2024-11-19 13:19:50.429729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.429756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.429874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.429900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.430015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.430043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.430267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.430342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.430469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.430503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.430656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.430688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.430861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.430895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.431092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.431125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.431301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.431333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.431447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.431479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 
00:27:47.233 [2024-11-19 13:19:50.431603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.431636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.431826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.431858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.431991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.432021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.432181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.432208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.432311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.432338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.432470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.432497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.432727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.432759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.432870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.432896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.433003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.433031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.433131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.433167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 
00:27:47.233 [2024-11-19 13:19:50.433329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.433361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.433495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.433522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.433632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.433659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.433883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.433909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.434024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.434053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.434253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.434280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.434390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.434417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.434525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.434552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.434667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.434693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.434787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.434813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 
00:27:47.233 [2024-11-19 13:19:50.434931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.434989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.435162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.435188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.435289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.435315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.233 qpair failed and we were unable to recover it. 00:27:47.233 [2024-11-19 13:19:50.435417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.233 [2024-11-19 13:19:50.435445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.435557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.435583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.435685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.435712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.435819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.435845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.436076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.436104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.436296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.436324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.436423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.436450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 
00:27:47.234 [2024-11-19 13:19:50.436562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.436590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.436754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.436782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.436903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.436930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.437071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.437121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.437249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.437283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.437400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.437432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.437617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.437650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.437818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.437851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.438065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.438102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.438238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.438270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 
00:27:47.234 [2024-11-19 13:19:50.438447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.438479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.438598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.438632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.438767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.438799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.438994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.439029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.439143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.439176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.439155] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.234 [2024-11-19 13:19:50.439182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.234 [2024-11-19 13:19:50.439194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.234 [2024-11-19 13:19:50.439200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.234 [2024-11-19 13:19:50.439205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.234 [2024-11-19 13:19:50.439367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.439401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.439618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.439650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.439769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.439801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 
00:27:47.234 [2024-11-19 13:19:50.439915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.439986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.440110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.440142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.440259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.440289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.440457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.440489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.440662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.440693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.440804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.440835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.234 qpair failed and we were unable to recover it. 00:27:47.234 [2024-11-19 13:19:50.440881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:47.234 [2024-11-19 13:19:50.440978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:47.235 [2024-11-19 13:19:50.441065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:47.235 [2024-11-19 13:19:50.441066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:47.234 [2024-11-19 13:19:50.441032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.234 [2024-11-19 13:19:50.441066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.441173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.441205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.441394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.441426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 
00:27:47.235 [2024-11-19 13:19:50.441606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.441638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.441768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.441801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.441931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.441970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.442084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.442116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.442290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.442322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.442431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.442462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.442591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.442624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.442749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.442782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.442887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.442919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.443044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.443083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 
00:27:47.235 [2024-11-19 13:19:50.443199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.443230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.443357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.443389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.443563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.443595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.443721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.443765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.443897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.443931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.444087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.444121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.444311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.444342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.444530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.444561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.444739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.444772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 00:27:47.235 [2024-11-19 13:19:50.444960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.235 [2024-11-19 13:19:50.444993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 00:27:47.235 qpair failed and we were unable to recover it. 
00:27:47.235 [2024-11-19 13:19:50.445117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.235 [2024-11-19 13:19:50.445148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.235 qpair failed and we were unable to recover it.
00:27:47.241 [... the same three-line sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 13:19:50.445276 through 13:19:50.489148 ...]
00:27:47.241 [three more identical failures on tqpair=0x7f01a4000b90 at 13:19:50.489, then the failing tqpair changes]
00:27:47.241 [2024-11-19 13:19:50.490146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.241 [2024-11-19 13:19:50.490207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:47.241 qpair failed and we were unable to recover it.
00:27:47.241 [the preceding three-line failure repeats ~87 times for tqpair=0x7f0198000b90 between 13:19:50.490 and 13:19:50.510; every attempt fails with errno = 111]
00:27:47.243 [2024-11-19 13:19:50.510741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.243 [2024-11-19 13:19:50.510794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a4000b90 with addr=10.0.0.2, port=4420
00:27:47.243 qpair failed and we were unable to recover it.
00:27:47.243 [the preceding three-line failure repeats ~40 times, back on tqpair=0x7f01a4000b90, between 13:19:50.510 and 13:19:50.518; every attempt fails with errno = 111]
00:27:47.244 [2024-11-19 13:19:50.519001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.244 [2024-11-19 13:19:50.519047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420
00:27:47.244 qpair failed and we were unable to recover it.
00:27:47.245 [the preceding three-line failure repeats ~40 times for tqpair=0x7f019c000b90 between 13:19:50.519 and 13:19:50.527; in total, roughly 210 connection attempts to 10.0.0.2:4420 in this window fail the same way with errno = 111]
00:27:47.245 [2024-11-19 13:19:50.528089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.245 [2024-11-19 13:19:50.528140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.245 qpair failed and we were unable to recover it. 00:27:47.245 [2024-11-19 13:19:50.528443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.245 [2024-11-19 13:19:50.528475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.245 qpair failed and we were unable to recover it. 00:27:47.245 [2024-11-19 13:19:50.528774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.245 [2024-11-19 13:19:50.528805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.245 qpair failed and we were unable to recover it. 00:27:47.245 [2024-11-19 13:19:50.529066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.245 [2024-11-19 13:19:50.529099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.245 qpair failed and we were unable to recover it. 00:27:47.245 [2024-11-19 13:19:50.529350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.245 [2024-11-19 13:19:50.529381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.529568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.529601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.529867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.529898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.530042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.530074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.530296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.530327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.530512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.530543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 
00:27:47.246 [2024-11-19 13:19:50.530777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.530809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.531074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.531106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.531285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.531317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.531578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.531610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.531821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.531853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.532089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.532121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.532307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.532338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.532544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.532575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.532767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.532798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.532935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.532979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 
00:27:47.246 [2024-11-19 13:19:50.533168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.533199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.533388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.533418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.533620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.533651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.533767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.533798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.534066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.534099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.534337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.534370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.534510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.534541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.534714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.534752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.535010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.535042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.535280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.535313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 
00:27:47.246 [2024-11-19 13:19:50.535580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.535611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.535846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.535878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.536139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.536172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.536413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.536445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.536679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.536710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.536847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.536879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.537080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.537112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.537362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.537393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.537581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.537613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.537792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.537823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 
00:27:47.246 [2024-11-19 13:19:50.538008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.538041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.538237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.538269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.538393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.538424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.538749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.538781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.539003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.246 [2024-11-19 13:19:50.539036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.246 qpair failed and we were unable to recover it. 00:27:47.246 [2024-11-19 13:19:50.539220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.539252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.539433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.539465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.539659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.539690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.539816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.539848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.540020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.540053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 
00:27:47.247 [2024-11-19 13:19:50.540293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.540325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.540505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.540536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.540774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.540806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.541065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.541097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.541285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.541322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.541513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.541545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.541824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.541856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.542097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.542130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.542400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.542432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:47.247 [2024-11-19 13:19:50.542637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.542671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 
00:27:47.247 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:47.247 [2024-11-19 13:19:50.542910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.542944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.543213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.543246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:47.247 [2024-11-19 13:19:50.543368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.543401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:47.247 [2024-11-19 13:19:50.543613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.543645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.247 [2024-11-19 13:19:50.543909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.543942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.544143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.544174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.544350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.544382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.544653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.544686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 
00:27:47.247 [2024-11-19 13:19:50.544928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.544971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.545102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.545134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.545330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.545363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.545625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.545656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.545924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.545966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.546237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.546268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.546470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.546502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.546669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.546701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.546972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.547006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.547181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.547215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 
00:27:47.247 [2024-11-19 13:19:50.547425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.547457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.247 [2024-11-19 13:19:50.547722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.247 [2024-11-19 13:19:50.547759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.247 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.547890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.547921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.548164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.548196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.548392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.548423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.548568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.548600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.548858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.548889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.549194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.549227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.549347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.549379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.549629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.549663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 
00:27:47.248 [2024-11-19 13:19:50.549792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.549823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.550065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.550097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.550279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.550312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.550548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.550581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.550864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.550897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.551033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.551067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.551289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.551322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.551525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.551556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.551760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.551792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.551922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.551963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 
00:27:47.248 [2024-11-19 13:19:50.552154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.552186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.552388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.552420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.552619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.552651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.552842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.552874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.553162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.553195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.553384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.553417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.553691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.553723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.553903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.553935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.554183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.554221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.554459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.554491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 
00:27:47.248 [2024-11-19 13:19:50.554741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.554773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.555010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.555043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.555230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.555262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.555498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.555530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.555709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.555741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.555882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.555913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.556099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.556133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.556323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.556354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.556651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.556683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.556819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.556850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 
00:27:47.248 [2024-11-19 13:19:50.557026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.557059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.557324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.557357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.557484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.248 [2024-11-19 13:19:50.557517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.248 qpair failed and we were unable to recover it. 00:27:47.248 [2024-11-19 13:19:50.557722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.249 [2024-11-19 13:19:50.557753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.249 qpair failed and we were unable to recover it. 00:27:47.249 [2024-11-19 13:19:50.557945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.249 [2024-11-19 13:19:50.557987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.249 qpair failed and we were unable to recover it. 00:27:47.249 [2024-11-19 13:19:50.558173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.249 [2024-11-19 13:19:50.558204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.249 qpair failed and we were unable to recover it. 00:27:47.249 [2024-11-19 13:19:50.558385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.249 [2024-11-19 13:19:50.558418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.249 qpair failed and we were unable to recover it. 00:27:47.249 [2024-11-19 13:19:50.558706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.249 [2024-11-19 13:19:50.558738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.249 qpair failed and we were unable to recover it. 00:27:47.249 [2024-11-19 13:19:50.558983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.249 [2024-11-19 13:19:50.559015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.249 qpair failed and we were unable to recover it. 00:27:47.249 [2024-11-19 13:19:50.559153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.249 [2024-11-19 13:19:50.559184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.249 qpair failed and we were unable to recover it. 
00:27:47.249 [2024-11-19 13:19:50.559319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.249 [2024-11-19 13:19:50.559352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ccba0 with addr=10.0.0.2, port=4420 00:27:47.249 qpair failed and we were unable to recover it.
00:27:47.249 [... the same posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error repeats back-to-back: for tqpair=0x18ccba0 (13:19:50.559-13:19:50.563), then tqpair=0x7f019c000b90 (13:19:50.563-13:19:50.573), then tqpair=0x7f0198000b90 (13:19:50.573-13:19:50.577), always with addr=10.0.0.2, port=4420; every attempt ends "qpair failed and we were unable to recover it." ...]
00:27:47.512 [... posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 repeats (13:19:50.578); each attempt ends "qpair failed and we were unable to recover it." ...]
00:27:47.513 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:47.513 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:47.513 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.513 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:47.513 [... posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error of tqpair=0x7f019c000b90 with addr=10.0.0.2, port=4420 repeats (13:19:50.579-13:19:50.580), interleaved with the xtrace lines above; every attempt ends "qpair failed and we were unable to recover it." ...]
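The xtrace lines above show nvmf_target_disconnect_tc2 registering its cleanup trap and creating the test bdev: rpc_cmd is the harness's wrapper around SPDK's JSON-RPC client, and bdev_malloc_create 64 512 -b Malloc0 creates a 64 MB RAM-backed bdev with a 512-byte block size, named Malloc0. A minimal sketch of the equivalent direct invocation, assuming an SPDK checkout with a target listening on the default RPC socket (paths illustrative, not taken from this log):

    # Minimal sketch, assuming a running SPDK target on the default RPC socket:
    # create the same 64 MB, 512-byte-block malloc bdev by hand.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0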
00:27:47.513 [... the posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error sequence continues for tqpair=0x7f019c000b90 (13:19:50.580-13:19:50.601), then tqpair=0x7f0198000b90 (13:19:50.601-13:19:50.608), always with addr=10.0.0.2, port=4420; every attempt ends "qpair failed and we were unable to recover it." ...]
00:27:47.516 [2024-11-19 13:19:50.609053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.516 [2024-11-19 13:19:50.609086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.516 qpair failed and we were unable to recover it. 00:27:47.516 [2024-11-19 13:19:50.609326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.516 [2024-11-19 13:19:50.609358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.516 qpair failed and we were unable to recover it. 00:27:47.516 [2024-11-19 13:19:50.609541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.516 [2024-11-19 13:19:50.609573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.516 qpair failed and we were unable to recover it. 00:27:47.516 [2024-11-19 13:19:50.609849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.516 [2024-11-19 13:19:50.609881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.516 qpair failed and we were unable to recover it. 00:27:47.516 [2024-11-19 13:19:50.610006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.516 [2024-11-19 13:19:50.610040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.516 qpair failed and we were unable to recover it. 00:27:47.516 [2024-11-19 13:19:50.610252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.516 [2024-11-19 13:19:50.610284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.516 qpair failed and we were unable to recover it. 00:27:47.516 [2024-11-19 13:19:50.610428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.516 [2024-11-19 13:19:50.610459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.516 qpair failed and we were unable to recover it. 00:27:47.516 [2024-11-19 13:19:50.610731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.516 [2024-11-19 13:19:50.610763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.516 qpair failed and we were unable to recover it. 00:27:47.516 [2024-11-19 13:19:50.610905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.516 [2024-11-19 13:19:50.610936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.516 qpair failed and we were unable to recover it. 00:27:47.516 [2024-11-19 13:19:50.611135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.516 [2024-11-19 13:19:50.611174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420 00:27:47.516 qpair failed and we were unable to recover it. 
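errno = 111 is ECONNREFUSED: the initiator keeps dialing 10.0.0.2:4420 while nothing is listening there yet, so every qpair connect is refused and retried. A minimal shell probe (an illustrative sketch, not part of the test suite) that surfaces the same condition:

```bash
# Illustrative sketch, not from the test suite: errno 111 (ECONNREFUSED)
# means no listener is accepting on 10.0.0.2:4420 yet. bash's /dev/tcp
# surfaces the same refusal that posix_sock_create logs above.
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 refused or unreachable (cf. connect() errno = 111)"
fi
```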
00:27:47.516 [2024-11-19 13:19:50.611411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.516 [2024-11-19 13:19:50.611443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:47.516 qpair failed and we were unable to recover it.
[failure sequence repeated once more, 13:19:50.611749]
00:27:47.516 Malloc0
[failure sequence repeated twice more, 13:19:50.612080 and 13:19:50.612374]
00:27:47.516 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[failure sequence repeated twice more, 13:19:50.612531 and 13:19:50.612816]
00:27:47.517 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
[failure sequence repeated once more, 13:19:50.613079]
00:27:47.517 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
[failure sequence repeated twice more, 13:19:50.613241 and 13:19:50.613384]
00:27:47.517 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[failure sequence repeated 7 more times, 13:19:50.613601 through 13:19:50.615237]
00:27:47.517 [2024-11-19 13:19:50.615492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[failure sequence repeated once more, 13:19:50.615522]
[failure sequence repeated 30 more times, 13:19:50.615791 through 13:19:50.622723]
[failure sequence repeated 4 more times, 13:19:50.622941 through 13:19:50.623687]
00:27:47.518 [2024-11-19 13:19:50.623984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.518 [2024-11-19 13:19:50.624017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:47.518 qpair failed and we were unable to recover it.
00:27:47.518 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[failure sequence repeated once more, 13:19:50.624234]
00:27:47.518 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
[failure sequence repeated once more, 13:19:50.624456]
00:27:47.518 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
[failure sequence repeated once more, 13:19:50.624684]
00:27:47.518 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[failure sequence repeated 9 more times, 13:19:50.624970 through 13:19:50.627015]
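The xtrace_disable / set +x pairs interleaved above are the harness muting bash command tracing around each rpc_cmd so the RPC plumbing does not flood the console. A minimal sketch of that pattern (the helper name is hypothetical, not SPDK's actual autotest_common.sh implementation):

```bash
# Minimal sketch of the trace-muting pattern seen in the log above; the
# helper name is hypothetical, not SPDK's actual autotest_common.sh code.
quiet_run() {
    set +x                 # matches the "set +x" lines in the trace
    "$@"                   # e.g. an rpc.py invocation
    local rc=$?
    set -x                 # restore xtrace for the surrounding script
    return $rc
}
set -x
quiet_run echo "rpc call goes here"
```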
[failure sequence repeated 18 more times, 13:19:50.627248 through 13:19:50.631758]
00:27:47.519 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[failure sequence repeated once more, 13:19:50.632074]
00:27:47.519 [2024-11-19 13:19:50.632376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.519 [2024-11-19 13:19:50.632409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:47.519 qpair failed and we were unable to recover it.
00:27:47.519 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:47.519 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
[failure sequence repeated twice more, 13:19:50.632699 and 13:19:50.632910]
00:27:47.519 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[failure sequence repeated 25 more times, 13:19:50.633194 through 13:19:50.638712]
[failure sequence repeated 6 more times, 13:19:50.638931 through 13:19:50.639974]
00:27:47.520 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[failure sequence repeated twice more, 13:19:50.640139 and 13:19:50.640351]
00:27:47.520 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[failure sequence repeated once more, 13:19:50.640561]
00:27:47.520 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
[failure sequence repeated once more, 13:19:50.640827]
00:27:47.520 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[failure sequence repeated 8 more times, 13:19:50.641042 through 13:19:50.642666]
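Stripped of the interleaved reconnect noise, the trace above is the standard SPDK NVMe-oF target bring-up: create the TCP transport, create the subsystem, attach the Malloc0 namespace, then open the listener. A condensed equivalent of the same four RPCs, assuming rpc_cmd in the harness forwards to SPDK's scripts/rpc.py (the step that creates the Malloc0 bdev falls outside this excerpt):

```bash
# The four RPCs traced above, condensed; issued via SPDK's scripts/rpc.py.
# Malloc0 is the bdev name printed in the log; its creation is not shown here.
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```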
00:27:47.520 [2024-11-19 13:19:50.642958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.520 [2024-11-19 13:19:50.642990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:47.520 qpair failed and we were unable to recover it.
00:27:47.520 [2024-11-19 13:19:50.643115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.520 [2024-11-19 13:19:50.643147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:47.520 qpair failed and we were unable to recover it.
00:27:47.520 [2024-11-19 13:19:50.643335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.520 [2024-11-19 13:19:50.643367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:47.520 qpair failed and we were unable to recover it.
00:27:47.520 [2024-11-19 13:19:50.643496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:47.520 [2024-11-19 13:19:50.643527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0198000b90 with addr=10.0.0.2, port=4420
00:27:47.520 qpair failed and we were unable to recover it.
00:27:47.520 [2024-11-19 13:19:50.643686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:47.520 [2024-11-19 13:19:50.646163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.520 [2024-11-19 13:19:50.646275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.520 [2024-11-19 13:19:50.646320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.520 [2024-11-19 13:19:50.646344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.520 [2024-11-19 13:19:50.646367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.520 [2024-11-19 13:19:50.646419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.520 qpair failed and we were unable to recover it.
00:27:47.520 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.520 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:47.520 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.520 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:47.520 [2024-11-19 13:19:50.656130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.520 [2024-11-19 13:19:50.656214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.520 [2024-11-19 13:19:50.656247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.520 [2024-11-19 13:19:50.656264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.520 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.520 [2024-11-19 13:19:50.656280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.520 [2024-11-19 13:19:50.656319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.520 qpair failed and we were unable to recover it.
00:27:47.520 13:19:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3004375
[2024-11-19 13:19:50.666117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.521 [2024-11-19 13:19:50.666186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.521 [2024-11-19 13:19:50.666209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.521 [2024-11-19 13:19:50.666221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.521 [2024-11-19 13:19:50.666232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.521 [2024-11-19 13:19:50.666256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.521 qpair failed and we were unable to recover it.
00:27:47.521 [2024-11-19 13:19:50.676103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.521 [2024-11-19 13:19:50.676185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.521 [2024-11-19 13:19:50.676201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.521 [2024-11-19 13:19:50.676209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.521 [2024-11-19 13:19:50.676216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.521 [2024-11-19 13:19:50.676234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.521 qpair failed and we were unable to recover it.
00:27:47.521 [2024-11-19 13:19:50.686082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.521 [2024-11-19 13:19:50.686142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.521 [2024-11-19 13:19:50.686156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.521 [2024-11-19 13:19:50.686163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.521 [2024-11-19 13:19:50.686169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.521 [2024-11-19 13:19:50.686183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.521 qpair failed and we were unable to recover it.
00:27:47.521 [2024-11-19 13:19:50.696110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.521 [2024-11-19 13:19:50.696215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.521 [2024-11-19 13:19:50.696229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.521 [2024-11-19 13:19:50.696236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.521 [2024-11-19 13:19:50.696242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.521 [2024-11-19 13:19:50.696257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.521 qpair failed and we were unable to recover it.
00:27:47.521 [2024-11-19 13:19:50.706039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.521 [2024-11-19 13:19:50.706089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.521 [2024-11-19 13:19:50.706104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.521 [2024-11-19 13:19:50.706110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.521 [2024-11-19 13:19:50.706117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.521 [2024-11-19 13:19:50.706132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.521 qpair failed and we were unable to recover it.
00:27:47.521 [2024-11-19 13:19:50.716153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.521 [2024-11-19 13:19:50.716211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.521 [2024-11-19 13:19:50.716225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.521 [2024-11-19 13:19:50.716232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.521 [2024-11-19 13:19:50.716238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.521 [2024-11-19 13:19:50.716252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.521 qpair failed and we were unable to recover it.
00:27:47.521 [2024-11-19 13:19:50.726214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.521 [2024-11-19 13:19:50.726272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.521 [2024-11-19 13:19:50.726286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.521 [2024-11-19 13:19:50.726292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.521 [2024-11-19 13:19:50.726299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.521 [2024-11-19 13:19:50.726313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.521 qpair failed and we were unable to recover it.
00:27:47.521 [2024-11-19 13:19:50.736220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.521 [2024-11-19 13:19:50.736289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.521 [2024-11-19 13:19:50.736306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.521 [2024-11-19 13:19:50.736313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.521 [2024-11-19 13:19:50.736319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.521 [2024-11-19 13:19:50.736333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.521 qpair failed and we were unable to recover it.
00:27:47.521 [2024-11-19 13:19:50.746246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.521 [2024-11-19 13:19:50.746297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.521 [2024-11-19 13:19:50.746310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.521 [2024-11-19 13:19:50.746317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.521 [2024-11-19 13:19:50.746323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.521 [2024-11-19 13:19:50.746337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.521 qpair failed and we were unable to recover it.
00:27:47.521 [2024-11-19 13:19:50.756258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.521 [2024-11-19 13:19:50.756315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.521 [2024-11-19 13:19:50.756328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.521 [2024-11-19 13:19:50.756334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.521 [2024-11-19 13:19:50.756341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.521 [2024-11-19 13:19:50.756355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.521 qpair failed and we were unable to recover it.
00:27:47.521 [2024-11-19 13:19:50.766300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.521 [2024-11-19 13:19:50.766355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.521 [2024-11-19 13:19:50.766368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.521 [2024-11-19 13:19:50.766375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.521 [2024-11-19 13:19:50.766381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.521 [2024-11-19 13:19:50.766395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.521 qpair failed and we were unable to recover it.
00:27:47.521 [2024-11-19 13:19:50.776304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.521 [2024-11-19 13:19:50.776358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.521 [2024-11-19 13:19:50.776371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.521 [2024-11-19 13:19:50.776378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.521 [2024-11-19 13:19:50.776387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.521 [2024-11-19 13:19:50.776401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.521 qpair failed and we were unable to recover it.
00:27:47.521 [2024-11-19 13:19:50.786344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.521 [2024-11-19 13:19:50.786402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.521 [2024-11-19 13:19:50.786416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.521 [2024-11-19 13:19:50.786423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.521 [2024-11-19 13:19:50.786429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.521 [2024-11-19 13:19:50.786444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.521 qpair failed and we were unable to recover it.
00:27:47.521 [2024-11-19 13:19:50.796384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.521 [2024-11-19 13:19:50.796440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.522 [2024-11-19 13:19:50.796454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.522 [2024-11-19 13:19:50.796461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.522 [2024-11-19 13:19:50.796467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.522 [2024-11-19 13:19:50.796481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.522 qpair failed and we were unable to recover it.
00:27:47.522 [2024-11-19 13:19:50.806337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.522 [2024-11-19 13:19:50.806392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.522 [2024-11-19 13:19:50.806406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.522 [2024-11-19 13:19:50.806412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.522 [2024-11-19 13:19:50.806419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.522 [2024-11-19 13:19:50.806433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.522 qpair failed and we were unable to recover it.
00:27:47.522 [2024-11-19 13:19:50.816434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.522 [2024-11-19 13:19:50.816490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.522 [2024-11-19 13:19:50.816504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.522 [2024-11-19 13:19:50.816511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.522 [2024-11-19 13:19:50.816517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.522 [2024-11-19 13:19:50.816531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.522 qpair failed and we were unable to recover it.
00:27:47.522 [2024-11-19 13:19:50.826454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.522 [2024-11-19 13:19:50.826507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.522 [2024-11-19 13:19:50.826521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.522 [2024-11-19 13:19:50.826527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.522 [2024-11-19 13:19:50.826533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.522 [2024-11-19 13:19:50.826547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.522 qpair failed and we were unable to recover it.
00:27:47.522 [2024-11-19 13:19:50.836481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.522 [2024-11-19 13:19:50.836540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.522 [2024-11-19 13:19:50.836554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.522 [2024-11-19 13:19:50.836561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.522 [2024-11-19 13:19:50.836567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.522 [2024-11-19 13:19:50.836581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.522 qpair failed and we were unable to recover it.
00:27:47.522 [2024-11-19 13:19:50.846542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.522 [2024-11-19 13:19:50.846596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.522 [2024-11-19 13:19:50.846610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.522 [2024-11-19 13:19:50.846616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.522 [2024-11-19 13:19:50.846622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.522 [2024-11-19 13:19:50.846636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.522 qpair failed and we were unable to recover it.
00:27:47.522 [2024-11-19 13:19:50.856462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.522 [2024-11-19 13:19:50.856517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.522 [2024-11-19 13:19:50.856530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.522 [2024-11-19 13:19:50.856537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.522 [2024-11-19 13:19:50.856543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.522 [2024-11-19 13:19:50.856558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.522 qpair failed and we were unable to recover it.
00:27:47.522 [2024-11-19 13:19:50.866554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.522 [2024-11-19 13:19:50.866618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.522 [2024-11-19 13:19:50.866634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.522 [2024-11-19 13:19:50.866641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.522 [2024-11-19 13:19:50.866647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.522 [2024-11-19 13:19:50.866662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.522 qpair failed and we were unable to recover it.
00:27:47.522 [2024-11-19 13:19:50.876594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.522 [2024-11-19 13:19:50.876650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.522 [2024-11-19 13:19:50.876663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.522 [2024-11-19 13:19:50.876670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.522 [2024-11-19 13:19:50.876676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.522 [2024-11-19 13:19:50.876690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.522 qpair failed and we were unable to recover it.
00:27:47.782 [2024-11-19 13:19:50.886626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.782 [2024-11-19 13:19:50.886681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.782 [2024-11-19 13:19:50.886695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.782 [2024-11-19 13:19:50.886702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.782 [2024-11-19 13:19:50.886708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.782 [2024-11-19 13:19:50.886723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.782 qpair failed and we were unable to recover it.
00:27:47.782 [2024-11-19 13:19:50.896670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.782 [2024-11-19 13:19:50.896728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.782 [2024-11-19 13:19:50.896742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.782 [2024-11-19 13:19:50.896749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.782 [2024-11-19 13:19:50.896754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.782 [2024-11-19 13:19:50.896769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.782 qpair failed and we were unable to recover it.
00:27:47.782 [2024-11-19 13:19:50.906682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.782 [2024-11-19 13:19:50.906735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.782 [2024-11-19 13:19:50.906748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.782 [2024-11-19 13:19:50.906755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.782 [2024-11-19 13:19:50.906765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.782 [2024-11-19 13:19:50.906780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.782 qpair failed and we were unable to recover it.
00:27:47.782 [2024-11-19 13:19:50.916754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.782 [2024-11-19 13:19:50.916861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.782 [2024-11-19 13:19:50.916875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.782 [2024-11-19 13:19:50.916881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.782 [2024-11-19 13:19:50.916888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.782 [2024-11-19 13:19:50.916902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.782 qpair failed and we were unable to recover it.
00:27:47.782 [2024-11-19 13:19:50.926718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.782 [2024-11-19 13:19:50.926776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.782 [2024-11-19 13:19:50.926789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.782 [2024-11-19 13:19:50.926796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.782 [2024-11-19 13:19:50.926802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.782 [2024-11-19 13:19:50.926817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.782 qpair failed and we were unable to recover it.
00:27:47.782 [2024-11-19 13:19:50.936758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.782 [2024-11-19 13:19:50.936811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.782 [2024-11-19 13:19:50.936824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.782 [2024-11-19 13:19:50.936831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.782 [2024-11-19 13:19:50.936837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.782 [2024-11-19 13:19:50.936852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.782 qpair failed and we were unable to recover it.
00:27:47.782 [2024-11-19 13:19:50.946793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.782 [2024-11-19 13:19:50.946897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.782 [2024-11-19 13:19:50.946911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.782 [2024-11-19 13:19:50.946918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.783 [2024-11-19 13:19:50.946924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.783 [2024-11-19 13:19:50.946939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.783 qpair failed and we were unable to recover it.
00:27:47.783 [2024-11-19 13:19:50.956823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.783 [2024-11-19 13:19:50.956877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.783 [2024-11-19 13:19:50.956891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.783 [2024-11-19 13:19:50.956897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.783 [2024-11-19 13:19:50.956903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.783 [2024-11-19 13:19:50.956917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.783 qpair failed and we were unable to recover it.
00:27:47.783 [2024-11-19 13:19:50.966854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.783 [2024-11-19 13:19:50.966905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.783 [2024-11-19 13:19:50.966919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.783 [2024-11-19 13:19:50.966925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.783 [2024-11-19 13:19:50.966931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.783 [2024-11-19 13:19:50.966945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.783 qpair failed and we were unable to recover it.
00:27:47.783 [2024-11-19 13:19:50.976880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.783 [2024-11-19 13:19:50.976960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.783 [2024-11-19 13:19:50.976973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.783 [2024-11-19 13:19:50.976980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.783 [2024-11-19 13:19:50.976986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.783 [2024-11-19 13:19:50.977001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.783 qpair failed and we were unable to recover it.
00:27:47.783 [2024-11-19 13:19:50.986918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.783 [2024-11-19 13:19:50.986992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.783 [2024-11-19 13:19:50.987007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.783 [2024-11-19 13:19:50.987013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.783 [2024-11-19 13:19:50.987020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.783 [2024-11-19 13:19:50.987034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.783 qpair failed and we were unable to recover it.
00:27:47.783 [2024-11-19 13:19:50.996941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.783 [2024-11-19 13:19:50.997007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.783 [2024-11-19 13:19:50.997021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.783 [2024-11-19 13:19:50.997028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.783 [2024-11-19 13:19:50.997034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.783 [2024-11-19 13:19:50.997049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.783 qpair failed and we were unable to recover it.
00:27:47.783 [2024-11-19 13:19:51.006966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.783 [2024-11-19 13:19:51.007024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.783 [2024-11-19 13:19:51.007038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.783 [2024-11-19 13:19:51.007044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.783 [2024-11-19 13:19:51.007051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.783 [2024-11-19 13:19:51.007066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.783 qpair failed and we were unable to recover it.
00:27:47.783 [2024-11-19 13:19:51.017003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.783 [2024-11-19 13:19:51.017055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.783 [2024-11-19 13:19:51.017069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.783 [2024-11-19 13:19:51.017075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.783 [2024-11-19 13:19:51.017081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.783 [2024-11-19 13:19:51.017096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.783 qpair failed and we were unable to recover it.
00:27:47.783 [2024-11-19 13:19:51.027014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.783 [2024-11-19 13:19:51.027066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.783 [2024-11-19 13:19:51.027079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.783 [2024-11-19 13:19:51.027086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.783 [2024-11-19 13:19:51.027092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.783 [2024-11-19 13:19:51.027106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.783 qpair failed and we were unable to recover it.
00:27:47.783 [2024-11-19 13:19:51.037061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.783 [2024-11-19 13:19:51.037118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.783 [2024-11-19 13:19:51.037131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.783 [2024-11-19 13:19:51.037142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.783 [2024-11-19 13:19:51.037148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.783 [2024-11-19 13:19:51.037162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.783 qpair failed and we were unable to recover it.
00:27:47.783 [2024-11-19 13:19:51.047085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.783 [2024-11-19 13:19:51.047140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.783 [2024-11-19 13:19:51.047153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.783 [2024-11-19 13:19:51.047160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.783 [2024-11-19 13:19:51.047166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.783 [2024-11-19 13:19:51.047180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.783 qpair failed and we were unable to recover it.
00:27:47.783 [2024-11-19 13:19:51.057104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.783 [2024-11-19 13:19:51.057158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.783 [2024-11-19 13:19:51.057172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.783 [2024-11-19 13:19:51.057179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.783 [2024-11-19 13:19:51.057185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.783 [2024-11-19 13:19:51.057200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.783 qpair failed and we were unable to recover it.
00:27:47.783 [2024-11-19 13:19:51.067131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.783 [2024-11-19 13:19:51.067180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.783 [2024-11-19 13:19:51.067194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.783 [2024-11-19 13:19:51.067200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.783 [2024-11-19 13:19:51.067206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.783 [2024-11-19 13:19:51.067221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.783 qpair failed and we were unable to recover it.
00:27:47.783 [2024-11-19 13:19:51.077194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.783 [2024-11-19 13:19:51.077251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.783 [2024-11-19 13:19:51.077264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.783 [2024-11-19 13:19:51.077271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.783 [2024-11-19 13:19:51.077277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.784 [2024-11-19 13:19:51.077295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.784 qpair failed and we were unable to recover it.
00:27:47.784 [2024-11-19 13:19:51.087194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.784 [2024-11-19 13:19:51.087249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.784 [2024-11-19 13:19:51.087263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.784 [2024-11-19 13:19:51.087270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.784 [2024-11-19 13:19:51.087276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.784 [2024-11-19 13:19:51.087290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.784 qpair failed and we were unable to recover it.
00:27:47.784 [2024-11-19 13:19:51.097227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.784 [2024-11-19 13:19:51.097293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.784 [2024-11-19 13:19:51.097306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.784 [2024-11-19 13:19:51.097313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.784 [2024-11-19 13:19:51.097319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.784 [2024-11-19 13:19:51.097334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.784 qpair failed and we were unable to recover it.
00:27:47.784 [2024-11-19 13:19:51.107291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.784 [2024-11-19 13:19:51.107352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.784 [2024-11-19 13:19:51.107365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.784 [2024-11-19 13:19:51.107372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.784 [2024-11-19 13:19:51.107377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.784 [2024-11-19 13:19:51.107392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.784 qpair failed and we were unable to recover it.
00:27:47.784 [2024-11-19 13:19:51.117309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.784 [2024-11-19 13:19:51.117414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.784 [2024-11-19 13:19:51.117427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.784 [2024-11-19 13:19:51.117433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.784 [2024-11-19 13:19:51.117440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.784 [2024-11-19 13:19:51.117454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.784 qpair failed and we were unable to recover it.
00:27:47.784 [2024-11-19 13:19:51.127296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.784 [2024-11-19 13:19:51.127351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.784 [2024-11-19 13:19:51.127366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.784 [2024-11-19 13:19:51.127372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.784 [2024-11-19 13:19:51.127378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.784 [2024-11-19 13:19:51.127392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.784 qpair failed and we were unable to recover it.
00:27:47.784 [2024-11-19 13:19:51.137322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.784 [2024-11-19 13:19:51.137375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.784 [2024-11-19 13:19:51.137389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.784 [2024-11-19 13:19:51.137395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.784 [2024-11-19 13:19:51.137401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.784 [2024-11-19 13:19:51.137414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.784 qpair failed and we were unable to recover it.
00:27:47.784 [2024-11-19 13:19:51.147344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.784 [2024-11-19 13:19:51.147393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.784 [2024-11-19 13:19:51.147407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.784 [2024-11-19 13:19:51.147413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.784 [2024-11-19 13:19:51.147419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:47.784 [2024-11-19 13:19:51.147433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:47.784 qpair failed and we were unable to recover it.
00:27:48.044 [2024-11-19 13:19:51.157346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.044 [2024-11-19 13:19:51.157429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.044 [2024-11-19 13:19:51.157443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.044 [2024-11-19 13:19:51.157450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.044 [2024-11-19 13:19:51.157456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:48.044 [2024-11-19 13:19:51.157471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.044 qpair failed and we were unable to recover it.
00:27:48.044 [2024-11-19 13:19:51.167407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.044 [2024-11-19 13:19:51.167462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.044 [2024-11-19 13:19:51.167475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.044 [2024-11-19 13:19:51.167486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.044 [2024-11-19 13:19:51.167492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:48.044 [2024-11-19 13:19:51.167506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.044 qpair failed and we were unable to recover it.
00:27:48.044 [2024-11-19 13:19:51.177365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.044 [2024-11-19 13:19:51.177420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.044 [2024-11-19 13:19:51.177434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.044 [2024-11-19 13:19:51.177441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.044 [2024-11-19 13:19:51.177446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:48.044 [2024-11-19 13:19:51.177461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:48.044 qpair failed and we were unable to recover it.
00:27:48.044 [2024-11-19 13:19:51.187463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.044 [2024-11-19 13:19:51.187523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.044 [2024-11-19 13:19:51.187536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.044 [2024-11-19 13:19:51.187543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.044 [2024-11-19 13:19:51.187549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.044 [2024-11-19 13:19:51.187564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.044 qpair failed and we were unable to recover it. 00:27:48.044 [2024-11-19 13:19:51.197513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.044 [2024-11-19 13:19:51.197577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.044 [2024-11-19 13:19:51.197591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.044 [2024-11-19 13:19:51.197598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.044 [2024-11-19 13:19:51.197604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.044 [2024-11-19 13:19:51.197618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.044 qpair failed and we were unable to recover it. 00:27:48.044 [2024-11-19 13:19:51.207460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.044 [2024-11-19 13:19:51.207561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.044 [2024-11-19 13:19:51.207575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.044 [2024-11-19 13:19:51.207582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.044 [2024-11-19 13:19:51.207588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.044 [2024-11-19 13:19:51.207607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.044 qpair failed and we were unable to recover it. 
00:27:48.044 [2024-11-19 13:19:51.217565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.044 [2024-11-19 13:19:51.217625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.044 [2024-11-19 13:19:51.217639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.044 [2024-11-19 13:19:51.217646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.044 [2024-11-19 13:19:51.217651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.044 [2024-11-19 13:19:51.217666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.044 qpair failed and we were unable to recover it. 00:27:48.044 [2024-11-19 13:19:51.227591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.044 [2024-11-19 13:19:51.227649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.044 [2024-11-19 13:19:51.227662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.044 [2024-11-19 13:19:51.227669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.044 [2024-11-19 13:19:51.227675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.044 [2024-11-19 13:19:51.227689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.044 qpair failed and we were unable to recover it. 00:27:48.044 [2024-11-19 13:19:51.237621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.044 [2024-11-19 13:19:51.237679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.044 [2024-11-19 13:19:51.237693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.044 [2024-11-19 13:19:51.237699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.044 [2024-11-19 13:19:51.237706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.044 [2024-11-19 13:19:51.237720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.044 qpair failed and we were unable to recover it. 
00:27:48.044 [2024-11-19 13:19:51.247640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.044 [2024-11-19 13:19:51.247699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.044 [2024-11-19 13:19:51.247714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.044 [2024-11-19 13:19:51.247720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.044 [2024-11-19 13:19:51.247726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.045 [2024-11-19 13:19:51.247742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.045 qpair failed and we were unable to recover it. 00:27:48.045 [2024-11-19 13:19:51.257650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.045 [2024-11-19 13:19:51.257706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.045 [2024-11-19 13:19:51.257719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.045 [2024-11-19 13:19:51.257726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.045 [2024-11-19 13:19:51.257731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.045 [2024-11-19 13:19:51.257746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.045 qpair failed and we were unable to recover it. 00:27:48.045 [2024-11-19 13:19:51.267701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.045 [2024-11-19 13:19:51.267755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.045 [2024-11-19 13:19:51.267771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.045 [2024-11-19 13:19:51.267778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.045 [2024-11-19 13:19:51.267784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.045 [2024-11-19 13:19:51.267798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.045 qpair failed and we were unable to recover it. 
00:27:48.045 [2024-11-19 13:19:51.277732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.045 [2024-11-19 13:19:51.277786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.045 [2024-11-19 13:19:51.277800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.045 [2024-11-19 13:19:51.277806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.045 [2024-11-19 13:19:51.277812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.045 [2024-11-19 13:19:51.277826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.045 qpair failed and we were unable to recover it. 00:27:48.045 [2024-11-19 13:19:51.287757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.045 [2024-11-19 13:19:51.287833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.045 [2024-11-19 13:19:51.287846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.045 [2024-11-19 13:19:51.287853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.045 [2024-11-19 13:19:51.287859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.045 [2024-11-19 13:19:51.287873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.045 qpair failed and we were unable to recover it. 00:27:48.045 [2024-11-19 13:19:51.297779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.045 [2024-11-19 13:19:51.297836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.045 [2024-11-19 13:19:51.297852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.045 [2024-11-19 13:19:51.297859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.045 [2024-11-19 13:19:51.297865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.045 [2024-11-19 13:19:51.297881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.045 qpair failed and we were unable to recover it. 
00:27:48.045 [2024-11-19 13:19:51.307741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.045 [2024-11-19 13:19:51.307797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.045 [2024-11-19 13:19:51.307813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.045 [2024-11-19 13:19:51.307821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.045 [2024-11-19 13:19:51.307827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.045 [2024-11-19 13:19:51.307842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.045 qpair failed and we were unable to recover it. 00:27:48.045 [2024-11-19 13:19:51.317847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.045 [2024-11-19 13:19:51.317906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.045 [2024-11-19 13:19:51.317920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.045 [2024-11-19 13:19:51.317927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.045 [2024-11-19 13:19:51.317933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.045 [2024-11-19 13:19:51.317952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.045 qpair failed and we were unable to recover it. 00:27:48.045 [2024-11-19 13:19:51.327870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.045 [2024-11-19 13:19:51.327929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.045 [2024-11-19 13:19:51.327945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.045 [2024-11-19 13:19:51.327956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.045 [2024-11-19 13:19:51.327963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.045 [2024-11-19 13:19:51.327977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.045 qpair failed and we were unable to recover it. 
00:27:48.045 [2024-11-19 13:19:51.337824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.045 [2024-11-19 13:19:51.337882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.045 [2024-11-19 13:19:51.337895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.045 [2024-11-19 13:19:51.337902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.045 [2024-11-19 13:19:51.337911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.045 [2024-11-19 13:19:51.337927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.045 qpair failed and we were unable to recover it. 00:27:48.045 [2024-11-19 13:19:51.347928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.045 [2024-11-19 13:19:51.347980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.045 [2024-11-19 13:19:51.347994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.045 [2024-11-19 13:19:51.348001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.045 [2024-11-19 13:19:51.348007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.045 [2024-11-19 13:19:51.348022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.045 qpair failed and we were unable to recover it. 00:27:48.045 [2024-11-19 13:19:51.357880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.045 [2024-11-19 13:19:51.357935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.045 [2024-11-19 13:19:51.357952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.045 [2024-11-19 13:19:51.357959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.045 [2024-11-19 13:19:51.357964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.045 [2024-11-19 13:19:51.357979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.045 qpair failed and we were unable to recover it. 
00:27:48.045 [2024-11-19 13:19:51.367987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.045 [2024-11-19 13:19:51.368040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.045 [2024-11-19 13:19:51.368053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.045 [2024-11-19 13:19:51.368060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.045 [2024-11-19 13:19:51.368066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.045 [2024-11-19 13:19:51.368080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.045 qpair failed and we were unable to recover it. 00:27:48.045 [2024-11-19 13:19:51.378011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.045 [2024-11-19 13:19:51.378066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.045 [2024-11-19 13:19:51.378080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.045 [2024-11-19 13:19:51.378087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.045 [2024-11-19 13:19:51.378093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.046 [2024-11-19 13:19:51.378108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.046 qpair failed and we were unable to recover it. 00:27:48.046 [2024-11-19 13:19:51.388071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.046 [2024-11-19 13:19:51.388121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.046 [2024-11-19 13:19:51.388135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.046 [2024-11-19 13:19:51.388141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.046 [2024-11-19 13:19:51.388147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.046 [2024-11-19 13:19:51.388162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.046 qpair failed and we were unable to recover it. 
00:27:48.046 [2024-11-19 13:19:51.398076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.046 [2024-11-19 13:19:51.398137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.046 [2024-11-19 13:19:51.398151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.046 [2024-11-19 13:19:51.398157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.046 [2024-11-19 13:19:51.398163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.046 [2024-11-19 13:19:51.398178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.046 qpair failed and we were unable to recover it. 00:27:48.046 [2024-11-19 13:19:51.408108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.046 [2024-11-19 13:19:51.408158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.046 [2024-11-19 13:19:51.408172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.046 [2024-11-19 13:19:51.408179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.046 [2024-11-19 13:19:51.408184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.046 [2024-11-19 13:19:51.408199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.046 qpair failed and we were unable to recover it. 00:27:48.046 [2024-11-19 13:19:51.418133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.046 [2024-11-19 13:19:51.418182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.046 [2024-11-19 13:19:51.418196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.046 [2024-11-19 13:19:51.418202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.046 [2024-11-19 13:19:51.418208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.046 [2024-11-19 13:19:51.418223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.046 qpair failed and we were unable to recover it. 
00:27:48.306 [2024-11-19 13:19:51.428167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.306 [2024-11-19 13:19:51.428220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.306 [2024-11-19 13:19:51.428237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.306 [2024-11-19 13:19:51.428244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.306 [2024-11-19 13:19:51.428249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.306 [2024-11-19 13:19:51.428264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-11-19 13:19:51.438199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.306 [2024-11-19 13:19:51.438254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.306 [2024-11-19 13:19:51.438268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.306 [2024-11-19 13:19:51.438274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.306 [2024-11-19 13:19:51.438280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.306 [2024-11-19 13:19:51.438294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-11-19 13:19:51.448356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.306 [2024-11-19 13:19:51.448421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.306 [2024-11-19 13:19:51.448434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.306 [2024-11-19 13:19:51.448441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.306 [2024-11-19 13:19:51.448447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.306 [2024-11-19 13:19:51.448462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.306 qpair failed and we were unable to recover it. 
00:27:48.306 [2024-11-19 13:19:51.458287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.306 [2024-11-19 13:19:51.458344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.306 [2024-11-19 13:19:51.458357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.306 [2024-11-19 13:19:51.458363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.306 [2024-11-19 13:19:51.458369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.306 [2024-11-19 13:19:51.458384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-11-19 13:19:51.468304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.306 [2024-11-19 13:19:51.468354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.306 [2024-11-19 13:19:51.468367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.306 [2024-11-19 13:19:51.468374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.306 [2024-11-19 13:19:51.468383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.306 [2024-11-19 13:19:51.468398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-11-19 13:19:51.478336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.306 [2024-11-19 13:19:51.478392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.306 [2024-11-19 13:19:51.478406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.306 [2024-11-19 13:19:51.478413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.306 [2024-11-19 13:19:51.478419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.306 [2024-11-19 13:19:51.478433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.306 qpair failed and we were unable to recover it. 
00:27:48.306 [2024-11-19 13:19:51.488326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.306 [2024-11-19 13:19:51.488409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.306 [2024-11-19 13:19:51.488422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.306 [2024-11-19 13:19:51.488428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.306 [2024-11-19 13:19:51.488434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.306 [2024-11-19 13:19:51.488448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-11-19 13:19:51.498356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.306 [2024-11-19 13:19:51.498409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.306 [2024-11-19 13:19:51.498423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.306 [2024-11-19 13:19:51.498430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.306 [2024-11-19 13:19:51.498436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.306 [2024-11-19 13:19:51.498450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.306 qpair failed and we were unable to recover it. 00:27:48.306 [2024-11-19 13:19:51.508394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.306 [2024-11-19 13:19:51.508450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.306 [2024-11-19 13:19:51.508463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.306 [2024-11-19 13:19:51.508469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.306 [2024-11-19 13:19:51.508475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.306 [2024-11-19 13:19:51.508490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.306 qpair failed and we were unable to recover it. 
00:27:48.307 [2024-11-19 13:19:51.518422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.307 [2024-11-19 13:19:51.518482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.307 [2024-11-19 13:19:51.518495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.307 [2024-11-19 13:19:51.518501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.307 [2024-11-19 13:19:51.518507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.307 [2024-11-19 13:19:51.518522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-11-19 13:19:51.528428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.307 [2024-11-19 13:19:51.528480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.307 [2024-11-19 13:19:51.528493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.307 [2024-11-19 13:19:51.528500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.307 [2024-11-19 13:19:51.528506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.307 [2024-11-19 13:19:51.528520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-11-19 13:19:51.538475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.307 [2024-11-19 13:19:51.538565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.307 [2024-11-19 13:19:51.538579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.307 [2024-11-19 13:19:51.538585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.307 [2024-11-19 13:19:51.538591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.307 [2024-11-19 13:19:51.538605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.307 qpair failed and we were unable to recover it. 
00:27:48.307 [2024-11-19 13:19:51.548427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.307 [2024-11-19 13:19:51.548486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.307 [2024-11-19 13:19:51.548499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.307 [2024-11-19 13:19:51.548506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.307 [2024-11-19 13:19:51.548512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.307 [2024-11-19 13:19:51.548527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-11-19 13:19:51.558457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.307 [2024-11-19 13:19:51.558518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.307 [2024-11-19 13:19:51.558531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.307 [2024-11-19 13:19:51.558538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.307 [2024-11-19 13:19:51.558544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.307 [2024-11-19 13:19:51.558559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-11-19 13:19:51.568524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.307 [2024-11-19 13:19:51.568613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.307 [2024-11-19 13:19:51.568627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.307 [2024-11-19 13:19:51.568633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.307 [2024-11-19 13:19:51.568639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.307 [2024-11-19 13:19:51.568655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.307 qpair failed and we were unable to recover it. 
00:27:48.307 [2024-11-19 13:19:51.578504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.307 [2024-11-19 13:19:51.578560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.307 [2024-11-19 13:19:51.578573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.307 [2024-11-19 13:19:51.578580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.307 [2024-11-19 13:19:51.578586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.307 [2024-11-19 13:19:51.578601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-11-19 13:19:51.588614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.307 [2024-11-19 13:19:51.588665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.307 [2024-11-19 13:19:51.588678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.307 [2024-11-19 13:19:51.588685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.307 [2024-11-19 13:19:51.588691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.307 [2024-11-19 13:19:51.588706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-11-19 13:19:51.598576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.307 [2024-11-19 13:19:51.598634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.307 [2024-11-19 13:19:51.598648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.307 [2024-11-19 13:19:51.598658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.307 [2024-11-19 13:19:51.598664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.307 [2024-11-19 13:19:51.598679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.307 qpair failed and we were unable to recover it. 
00:27:48.307 [2024-11-19 13:19:51.608675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.307 [2024-11-19 13:19:51.608731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.307 [2024-11-19 13:19:51.608744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.307 [2024-11-19 13:19:51.608751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.307 [2024-11-19 13:19:51.608757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.307 [2024-11-19 13:19:51.608773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-11-19 13:19:51.618637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.307 [2024-11-19 13:19:51.618688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.307 [2024-11-19 13:19:51.618701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.307 [2024-11-19 13:19:51.618707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.307 [2024-11-19 13:19:51.618713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.307 [2024-11-19 13:19:51.618728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.307 [2024-11-19 13:19:51.628716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.307 [2024-11-19 13:19:51.628778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.307 [2024-11-19 13:19:51.628792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.307 [2024-11-19 13:19:51.628799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.307 [2024-11-19 13:19:51.628805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.307 [2024-11-19 13:19:51.628819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.307 qpair failed and we were unable to recover it. 
00:27:48.307 [2024-11-19 13:19:51.638683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.307 [2024-11-19 13:19:51.638739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.307 [2024-11-19 13:19:51.638753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.307 [2024-11-19 13:19:51.638759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.307 [2024-11-19 13:19:51.638765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.307 [2024-11-19 13:19:51.638783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.307 qpair failed and we were unable to recover it. 00:27:48.308 [2024-11-19 13:19:51.648782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.308 [2024-11-19 13:19:51.648860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.308 [2024-11-19 13:19:51.648874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.308 [2024-11-19 13:19:51.648881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.308 [2024-11-19 13:19:51.648886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.308 [2024-11-19 13:19:51.648900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-11-19 13:19:51.658755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.308 [2024-11-19 13:19:51.658807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.308 [2024-11-19 13:19:51.658820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.308 [2024-11-19 13:19:51.658827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.308 [2024-11-19 13:19:51.658833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.308 [2024-11-19 13:19:51.658848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.308 qpair failed and we were unable to recover it. 
00:27:48.308 [2024-11-19 13:19:51.668769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.308 [2024-11-19 13:19:51.668859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.308 [2024-11-19 13:19:51.668873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.308 [2024-11-19 13:19:51.668879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.308 [2024-11-19 13:19:51.668885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.308 [2024-11-19 13:19:51.668901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.308 [2024-11-19 13:19:51.678828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.308 [2024-11-19 13:19:51.678885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.308 [2024-11-19 13:19:51.678899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.308 [2024-11-19 13:19:51.678905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.308 [2024-11-19 13:19:51.678911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.308 [2024-11-19 13:19:51.678926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.308 qpair failed and we were unable to recover it. 00:27:48.568 [2024-11-19 13:19:51.688832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.568 [2024-11-19 13:19:51.688893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.568 [2024-11-19 13:19:51.688906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.568 [2024-11-19 13:19:51.688913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.568 [2024-11-19 13:19:51.688919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.568 [2024-11-19 13:19:51.688934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.568 qpair failed and we were unable to recover it. 
00:27:48.568 [2024-11-19 13:19:51.698885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.568 [2024-11-19 13:19:51.698978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.568 [2024-11-19 13:19:51.698993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.568 [2024-11-19 13:19:51.698999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.568 [2024-11-19 13:19:51.699005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.568 [2024-11-19 13:19:51.699020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.568 qpair failed and we were unable to recover it. 00:27:48.568 [2024-11-19 13:19:51.708881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.568 [2024-11-19 13:19:51.708939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.568 [2024-11-19 13:19:51.708958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.568 [2024-11-19 13:19:51.708965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.568 [2024-11-19 13:19:51.708971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.568 [2024-11-19 13:19:51.708987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.568 qpair failed and we were unable to recover it. 00:27:48.569 [2024-11-19 13:19:51.718919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.569 [2024-11-19 13:19:51.718981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.569 [2024-11-19 13:19:51.718996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.569 [2024-11-19 13:19:51.719003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.569 [2024-11-19 13:19:51.719009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:48.569 [2024-11-19 13:19:51.719024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:48.569 qpair failed and we were unable to recover it. 
00:27:49.094 [2024-11-19 13:19:52.360837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.094 [2024-11-19 13:19:52.360892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.094 [2024-11-19 13:19:52.360907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.094 [2024-11-19 13:19:52.360914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.094 [2024-11-19 13:19:52.360920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.094 [2024-11-19 13:19:52.360935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.094 qpair failed and we were unable to recover it. 00:27:49.094 [2024-11-19 13:19:52.370846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.094 [2024-11-19 13:19:52.370899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.094 [2024-11-19 13:19:52.370913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.094 [2024-11-19 13:19:52.370920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.094 [2024-11-19 13:19:52.370926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.094 [2024-11-19 13:19:52.370941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.094 qpair failed and we were unable to recover it. 00:27:49.094 [2024-11-19 13:19:52.380871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.094 [2024-11-19 13:19:52.380955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.094 [2024-11-19 13:19:52.380969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.094 [2024-11-19 13:19:52.380975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.094 [2024-11-19 13:19:52.380981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.094 [2024-11-19 13:19:52.380996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.094 qpair failed and we were unable to recover it. 
00:27:49.094 [2024-11-19 13:19:52.390903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.094 [2024-11-19 13:19:52.390967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.094 [2024-11-19 13:19:52.390981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.094 [2024-11-19 13:19:52.390988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.094 [2024-11-19 13:19:52.390994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.094 [2024-11-19 13:19:52.391009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.094 qpair failed and we were unable to recover it. 00:27:49.094 [2024-11-19 13:19:52.400932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.094 [2024-11-19 13:19:52.400993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.094 [2024-11-19 13:19:52.401006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.094 [2024-11-19 13:19:52.401013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.094 [2024-11-19 13:19:52.401019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.094 [2024-11-19 13:19:52.401034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.094 qpair failed and we were unable to recover it. 00:27:49.094 [2024-11-19 13:19:52.410960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.094 [2024-11-19 13:19:52.411040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.094 [2024-11-19 13:19:52.411054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.094 [2024-11-19 13:19:52.411060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.094 [2024-11-19 13:19:52.411066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.094 [2024-11-19 13:19:52.411081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.094 qpair failed and we were unable to recover it. 
00:27:49.094 [2024-11-19 13:19:52.421029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.095 [2024-11-19 13:19:52.421085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.095 [2024-11-19 13:19:52.421102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.095 [2024-11-19 13:19:52.421109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.095 [2024-11-19 13:19:52.421114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.095 [2024-11-19 13:19:52.421129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.095 qpair failed and we were unable to recover it. 00:27:49.095 [2024-11-19 13:19:52.431038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.095 [2024-11-19 13:19:52.431103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.095 [2024-11-19 13:19:52.431116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.095 [2024-11-19 13:19:52.431123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.095 [2024-11-19 13:19:52.431129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.095 [2024-11-19 13:19:52.431144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.095 qpair failed and we were unable to recover it. 00:27:49.095 [2024-11-19 13:19:52.441048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.095 [2024-11-19 13:19:52.441106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.095 [2024-11-19 13:19:52.441119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.095 [2024-11-19 13:19:52.441125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.095 [2024-11-19 13:19:52.441131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.095 [2024-11-19 13:19:52.441145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.095 qpair failed and we were unable to recover it. 
00:27:49.095 [2024-11-19 13:19:52.451065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.095 [2024-11-19 13:19:52.451116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.095 [2024-11-19 13:19:52.451129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.095 [2024-11-19 13:19:52.451136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.095 [2024-11-19 13:19:52.451141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.095 [2024-11-19 13:19:52.451156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.095 qpair failed and we were unable to recover it. 00:27:49.095 [2024-11-19 13:19:52.461095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.095 [2024-11-19 13:19:52.461145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.095 [2024-11-19 13:19:52.461158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.095 [2024-11-19 13:19:52.461164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.095 [2024-11-19 13:19:52.461174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.095 [2024-11-19 13:19:52.461188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.095 qpair failed and we were unable to recover it. 00:27:49.355 [2024-11-19 13:19:52.471122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.355 [2024-11-19 13:19:52.471183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.355 [2024-11-19 13:19:52.471197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.355 [2024-11-19 13:19:52.471203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.355 [2024-11-19 13:19:52.471209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.355 [2024-11-19 13:19:52.471224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.355 qpair failed and we were unable to recover it. 
00:27:49.355 [2024-11-19 13:19:52.481157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.355 [2024-11-19 13:19:52.481214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.355 [2024-11-19 13:19:52.481228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.355 [2024-11-19 13:19:52.481234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.355 [2024-11-19 13:19:52.481241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.355 [2024-11-19 13:19:52.481255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.355 qpair failed and we were unable to recover it. 00:27:49.355 [2024-11-19 13:19:52.491197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.355 [2024-11-19 13:19:52.491255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.355 [2024-11-19 13:19:52.491269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.355 [2024-11-19 13:19:52.491276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.355 [2024-11-19 13:19:52.491281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.355 [2024-11-19 13:19:52.491296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.355 qpair failed and we were unable to recover it. 00:27:49.355 [2024-11-19 13:19:52.501216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.355 [2024-11-19 13:19:52.501264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.355 [2024-11-19 13:19:52.501277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.355 [2024-11-19 13:19:52.501284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.355 [2024-11-19 13:19:52.501290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.355 [2024-11-19 13:19:52.501305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.355 qpair failed and we were unable to recover it. 
00:27:49.355 [2024-11-19 13:19:52.511241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.355 [2024-11-19 13:19:52.511292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.355 [2024-11-19 13:19:52.511305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.355 [2024-11-19 13:19:52.511311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.355 [2024-11-19 13:19:52.511317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.355 [2024-11-19 13:19:52.511332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.355 qpair failed and we were unable to recover it. 00:27:49.355 [2024-11-19 13:19:52.521331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.355 [2024-11-19 13:19:52.521386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.355 [2024-11-19 13:19:52.521399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.355 [2024-11-19 13:19:52.521406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.355 [2024-11-19 13:19:52.521412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.355 [2024-11-19 13:19:52.521427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.355 qpair failed and we were unable to recover it. 00:27:49.355 [2024-11-19 13:19:52.531317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.355 [2024-11-19 13:19:52.531373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.355 [2024-11-19 13:19:52.531387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.355 [2024-11-19 13:19:52.531393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.355 [2024-11-19 13:19:52.531399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.356 [2024-11-19 13:19:52.531414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.356 qpair failed and we were unable to recover it. 
00:27:49.356 [2024-11-19 13:19:52.541339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.356 [2024-11-19 13:19:52.541394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.356 [2024-11-19 13:19:52.541408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.356 [2024-11-19 13:19:52.541415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.356 [2024-11-19 13:19:52.541420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.356 [2024-11-19 13:19:52.541435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.356 qpair failed and we were unable to recover it. 00:27:49.356 [2024-11-19 13:19:52.551369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.356 [2024-11-19 13:19:52.551423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.356 [2024-11-19 13:19:52.551439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.356 [2024-11-19 13:19:52.551446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.356 [2024-11-19 13:19:52.551452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.356 [2024-11-19 13:19:52.551467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.356 qpair failed and we were unable to recover it. 00:27:49.356 [2024-11-19 13:19:52.561401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.356 [2024-11-19 13:19:52.561464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.356 [2024-11-19 13:19:52.561478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.356 [2024-11-19 13:19:52.561484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.356 [2024-11-19 13:19:52.561490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.356 [2024-11-19 13:19:52.561504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.356 qpair failed and we were unable to recover it. 
00:27:49.356 [2024-11-19 13:19:52.571424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.356 [2024-11-19 13:19:52.571509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.356 [2024-11-19 13:19:52.571523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.356 [2024-11-19 13:19:52.571529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.356 [2024-11-19 13:19:52.571535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.356 [2024-11-19 13:19:52.571550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.356 qpair failed and we were unable to recover it. 00:27:49.356 [2024-11-19 13:19:52.581512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.356 [2024-11-19 13:19:52.581571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.356 [2024-11-19 13:19:52.581584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.356 [2024-11-19 13:19:52.581591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.356 [2024-11-19 13:19:52.581597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.356 [2024-11-19 13:19:52.581611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.356 qpair failed and we were unable to recover it. 00:27:49.356 [2024-11-19 13:19:52.591494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.356 [2024-11-19 13:19:52.591562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.356 [2024-11-19 13:19:52.591575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.356 [2024-11-19 13:19:52.591587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.356 [2024-11-19 13:19:52.591593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.356 [2024-11-19 13:19:52.591607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.356 qpair failed and we were unable to recover it. 
00:27:49.356 [2024-11-19 13:19:52.601524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.356 [2024-11-19 13:19:52.601582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.356 [2024-11-19 13:19:52.601596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.356 [2024-11-19 13:19:52.601603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.356 [2024-11-19 13:19:52.601608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.356 [2024-11-19 13:19:52.601623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.356 qpair failed and we were unable to recover it. 00:27:49.356 [2024-11-19 13:19:52.611546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.356 [2024-11-19 13:19:52.611605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.356 [2024-11-19 13:19:52.611619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.356 [2024-11-19 13:19:52.611625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.356 [2024-11-19 13:19:52.611631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.356 [2024-11-19 13:19:52.611646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.356 qpair failed and we were unable to recover it. 00:27:49.356 [2024-11-19 13:19:52.621578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.356 [2024-11-19 13:19:52.621628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.356 [2024-11-19 13:19:52.621641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.356 [2024-11-19 13:19:52.621648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.356 [2024-11-19 13:19:52.621654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.356 [2024-11-19 13:19:52.621668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.356 qpair failed and we were unable to recover it. 
00:27:49.356 [2024-11-19 13:19:52.631598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.356 [2024-11-19 13:19:52.631650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.356 [2024-11-19 13:19:52.631664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.356 [2024-11-19 13:19:52.631671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.356 [2024-11-19 13:19:52.631677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.356 [2024-11-19 13:19:52.631692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.356 qpair failed and we were unable to recover it. 00:27:49.356 [2024-11-19 13:19:52.641635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.356 [2024-11-19 13:19:52.641690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.356 [2024-11-19 13:19:52.641703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.356 [2024-11-19 13:19:52.641709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.356 [2024-11-19 13:19:52.641715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.356 [2024-11-19 13:19:52.641729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.356 qpair failed and we were unable to recover it. 00:27:49.356 [2024-11-19 13:19:52.651682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.356 [2024-11-19 13:19:52.651736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.356 [2024-11-19 13:19:52.651749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.356 [2024-11-19 13:19:52.651756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.356 [2024-11-19 13:19:52.651762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.356 [2024-11-19 13:19:52.651777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.356 qpair failed and we were unable to recover it. 
00:27:49.356 [2024-11-19 13:19:52.661692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.356 [2024-11-19 13:19:52.661742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.356 [2024-11-19 13:19:52.661756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.356 [2024-11-19 13:19:52.661762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.356 [2024-11-19 13:19:52.661768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.357 [2024-11-19 13:19:52.661783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.357 qpair failed and we were unable to recover it. 00:27:49.357 [2024-11-19 13:19:52.671746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.357 [2024-11-19 13:19:52.671835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.357 [2024-11-19 13:19:52.671848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.357 [2024-11-19 13:19:52.671854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.357 [2024-11-19 13:19:52.671860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.357 [2024-11-19 13:19:52.671874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.357 qpair failed and we were unable to recover it. 00:27:49.357 [2024-11-19 13:19:52.681760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.357 [2024-11-19 13:19:52.681819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.357 [2024-11-19 13:19:52.681832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.357 [2024-11-19 13:19:52.681838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.357 [2024-11-19 13:19:52.681844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.357 [2024-11-19 13:19:52.681859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.357 qpair failed and we were unable to recover it. 
00:27:49.357 [2024-11-19 13:19:52.691793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.357 [2024-11-19 13:19:52.691852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.357 [2024-11-19 13:19:52.691866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.357 [2024-11-19 13:19:52.691873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.357 [2024-11-19 13:19:52.691879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.357 [2024-11-19 13:19:52.691893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.357 qpair failed and we were unable to recover it. 00:27:49.357 [2024-11-19 13:19:52.701808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.357 [2024-11-19 13:19:52.701860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.357 [2024-11-19 13:19:52.701873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.357 [2024-11-19 13:19:52.701880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.357 [2024-11-19 13:19:52.701886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.357 [2024-11-19 13:19:52.701900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.357 qpair failed and we were unable to recover it. 00:27:49.357 [2024-11-19 13:19:52.711836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.357 [2024-11-19 13:19:52.711885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.357 [2024-11-19 13:19:52.711899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.357 [2024-11-19 13:19:52.711905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.357 [2024-11-19 13:19:52.711911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.357 [2024-11-19 13:19:52.711926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.357 qpair failed and we were unable to recover it. 
00:27:49.357 [2024-11-19 13:19:52.721870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.357 [2024-11-19 13:19:52.721923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.357 [2024-11-19 13:19:52.721937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.357 [2024-11-19 13:19:52.721949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.357 [2024-11-19 13:19:52.721956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.357 [2024-11-19 13:19:52.721971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.357 qpair failed and we were unable to recover it. 00:27:49.617 [2024-11-19 13:19:52.731903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.617 [2024-11-19 13:19:52.731961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.617 [2024-11-19 13:19:52.731974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.617 [2024-11-19 13:19:52.731981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.617 [2024-11-19 13:19:52.731987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.617 [2024-11-19 13:19:52.732002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.617 qpair failed and we were unable to recover it. 00:27:49.617 [2024-11-19 13:19:52.741919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.617 [2024-11-19 13:19:52.741973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.617 [2024-11-19 13:19:52.741986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.617 [2024-11-19 13:19:52.741993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.617 [2024-11-19 13:19:52.741999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.617 [2024-11-19 13:19:52.742013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.617 qpair failed and we were unable to recover it. 
00:27:49.617 [2024-11-19 13:19:52.751932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.617 [2024-11-19 13:19:52.751991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.617 [2024-11-19 13:19:52.752005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.617 [2024-11-19 13:19:52.752012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.617 [2024-11-19 13:19:52.752017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.617 [2024-11-19 13:19:52.752032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.618 qpair failed and we were unable to recover it. 00:27:49.618 [2024-11-19 13:19:52.762003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.618 [2024-11-19 13:19:52.762066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.618 [2024-11-19 13:19:52.762080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.618 [2024-11-19 13:19:52.762087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.618 [2024-11-19 13:19:52.762093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.618 [2024-11-19 13:19:52.762111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.618 qpair failed and we were unable to recover it. 00:27:49.618 [2024-11-19 13:19:52.772017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.618 [2024-11-19 13:19:52.772072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.618 [2024-11-19 13:19:52.772086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.618 [2024-11-19 13:19:52.772093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.618 [2024-11-19 13:19:52.772099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.618 [2024-11-19 13:19:52.772114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.618 qpair failed and we were unable to recover it. 
00:27:49.618 [2024-11-19 13:19:52.782043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.618 [2024-11-19 13:19:52.782099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.618 [2024-11-19 13:19:52.782113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.618 [2024-11-19 13:19:52.782120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.618 [2024-11-19 13:19:52.782126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.618 [2024-11-19 13:19:52.782141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.618 qpair failed and we were unable to recover it. 00:27:49.618 [2024-11-19 13:19:52.792058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.618 [2024-11-19 13:19:52.792112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.618 [2024-11-19 13:19:52.792126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.618 [2024-11-19 13:19:52.792133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.618 [2024-11-19 13:19:52.792139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.618 [2024-11-19 13:19:52.792154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.618 qpair failed and we were unable to recover it. 00:27:49.618 [2024-11-19 13:19:52.802102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.618 [2024-11-19 13:19:52.802163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.618 [2024-11-19 13:19:52.802176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.618 [2024-11-19 13:19:52.802184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.618 [2024-11-19 13:19:52.802190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.618 [2024-11-19 13:19:52.802205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.618 qpair failed and we were unable to recover it. 
00:27:49.618 [2024-11-19 13:19:52.812054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.618 [2024-11-19 13:19:52.812118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.618 [2024-11-19 13:19:52.812132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.618 [2024-11-19 13:19:52.812138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.618 [2024-11-19 13:19:52.812144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.618 [2024-11-19 13:19:52.812159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.618 qpair failed and we were unable to recover it. 00:27:49.618 [2024-11-19 13:19:52.822179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.618 [2024-11-19 13:19:52.822236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.618 [2024-11-19 13:19:52.822249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.618 [2024-11-19 13:19:52.822256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.618 [2024-11-19 13:19:52.822262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.618 [2024-11-19 13:19:52.822277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.618 qpair failed and we were unable to recover it. 00:27:49.618 [2024-11-19 13:19:52.832179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.618 [2024-11-19 13:19:52.832230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.618 [2024-11-19 13:19:52.832243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.618 [2024-11-19 13:19:52.832250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.618 [2024-11-19 13:19:52.832256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.618 [2024-11-19 13:19:52.832270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.618 qpair failed and we were unable to recover it. 
00:27:49.618 [2024-11-19 13:19:52.842169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.618 [2024-11-19 13:19:52.842261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.618 [2024-11-19 13:19:52.842274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.618 [2024-11-19 13:19:52.842281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.618 [2024-11-19 13:19:52.842287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.618 [2024-11-19 13:19:52.842301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.618 qpair failed and we were unable to recover it. 00:27:49.618 [2024-11-19 13:19:52.852249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.618 [2024-11-19 13:19:52.852305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.618 [2024-11-19 13:19:52.852322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.618 [2024-11-19 13:19:52.852329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.618 [2024-11-19 13:19:52.852334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.618 [2024-11-19 13:19:52.852349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.618 qpair failed and we were unable to recover it. 00:27:49.618 [2024-11-19 13:19:52.862271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.618 [2024-11-19 13:19:52.862320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.618 [2024-11-19 13:19:52.862333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.618 [2024-11-19 13:19:52.862340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.618 [2024-11-19 13:19:52.862347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.618 [2024-11-19 13:19:52.862361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.618 qpair failed and we were unable to recover it. 
00:27:49.618 [2024-11-19 13:19:52.872290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.618 [2024-11-19 13:19:52.872344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.618 [2024-11-19 13:19:52.872357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.618 [2024-11-19 13:19:52.872364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.618 [2024-11-19 13:19:52.872370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.618 [2024-11-19 13:19:52.872385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.618 qpair failed and we were unable to recover it. 00:27:49.618 [2024-11-19 13:19:52.882341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.618 [2024-11-19 13:19:52.882399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.618 [2024-11-19 13:19:52.882412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.618 [2024-11-19 13:19:52.882419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.618 [2024-11-19 13:19:52.882425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.619 [2024-11-19 13:19:52.882440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.619 qpair failed and we were unable to recover it. 00:27:49.619 [2024-11-19 13:19:52.892362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.619 [2024-11-19 13:19:52.892419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.619 [2024-11-19 13:19:52.892433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.619 [2024-11-19 13:19:52.892440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.619 [2024-11-19 13:19:52.892446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.619 [2024-11-19 13:19:52.892464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.619 qpair failed and we were unable to recover it. 
00:27:49.619 [2024-11-19 13:19:52.902384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.619 [2024-11-19 13:19:52.902437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.619 [2024-11-19 13:19:52.902451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.619 [2024-11-19 13:19:52.902458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.619 [2024-11-19 13:19:52.902464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.619 [2024-11-19 13:19:52.902478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.619 qpair failed and we were unable to recover it. 00:27:49.619 [2024-11-19 13:19:52.912420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.619 [2024-11-19 13:19:52.912478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.619 [2024-11-19 13:19:52.912491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.619 [2024-11-19 13:19:52.912498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.619 [2024-11-19 13:19:52.912504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.619 [2024-11-19 13:19:52.912519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.619 qpair failed and we were unable to recover it. 00:27:49.619 [2024-11-19 13:19:52.922515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.619 [2024-11-19 13:19:52.922620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.619 [2024-11-19 13:19:52.922633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.619 [2024-11-19 13:19:52.922640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.619 [2024-11-19 13:19:52.922646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.619 [2024-11-19 13:19:52.922661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.619 qpair failed and we were unable to recover it. 
00:27:49.619 [2024-11-19 13:19:52.932487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.619 [2024-11-19 13:19:52.932543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.619 [2024-11-19 13:19:52.932557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.619 [2024-11-19 13:19:52.932563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.619 [2024-11-19 13:19:52.932569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.619 [2024-11-19 13:19:52.932583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.619 qpair failed and we were unable to recover it. 00:27:49.619 [2024-11-19 13:19:52.942501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.619 [2024-11-19 13:19:52.942561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.619 [2024-11-19 13:19:52.942575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.619 [2024-11-19 13:19:52.942582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.619 [2024-11-19 13:19:52.942587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.619 [2024-11-19 13:19:52.942602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.619 qpair failed and we were unable to recover it. 00:27:49.619 [2024-11-19 13:19:52.952465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.619 [2024-11-19 13:19:52.952521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.619 [2024-11-19 13:19:52.952534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.619 [2024-11-19 13:19:52.952541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.619 [2024-11-19 13:19:52.952547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.619 [2024-11-19 13:19:52.952562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.619 qpair failed and we were unable to recover it. 
00:27:49.619 [2024-11-19 13:19:52.962558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.619 [2024-11-19 13:19:52.962615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.619 [2024-11-19 13:19:52.962629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.619 [2024-11-19 13:19:52.962636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.619 [2024-11-19 13:19:52.962642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.619 [2024-11-19 13:19:52.962657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.619 qpair failed and we were unable to recover it. 00:27:49.619 [2024-11-19 13:19:52.972573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.619 [2024-11-19 13:19:52.972627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.619 [2024-11-19 13:19:52.972640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.619 [2024-11-19 13:19:52.972647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.619 [2024-11-19 13:19:52.972653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.619 [2024-11-19 13:19:52.972667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.619 qpair failed and we were unable to recover it. 00:27:49.619 [2024-11-19 13:19:52.982556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.619 [2024-11-19 13:19:52.982607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.619 [2024-11-19 13:19:52.982625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.619 [2024-11-19 13:19:52.982632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.619 [2024-11-19 13:19:52.982638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.619 [2024-11-19 13:19:52.982652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.619 qpair failed and we were unable to recover it. 
00:27:49.880 [2024-11-19 13:19:52.992679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.880 [2024-11-19 13:19:52.992730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.880 [2024-11-19 13:19:52.992744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.880 [2024-11-19 13:19:52.992750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.880 [2024-11-19 13:19:52.992757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.880 [2024-11-19 13:19:52.992772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.880 qpair failed and we were unable to recover it. 00:27:49.880 [2024-11-19 13:19:53.002687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.880 [2024-11-19 13:19:53.002743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.880 [2024-11-19 13:19:53.002757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.880 [2024-11-19 13:19:53.002764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.880 [2024-11-19 13:19:53.002770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.880 [2024-11-19 13:19:53.002785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.880 qpair failed and we were unable to recover it. 00:27:49.880 [2024-11-19 13:19:53.012716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.880 [2024-11-19 13:19:53.012786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.880 [2024-11-19 13:19:53.012800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.880 [2024-11-19 13:19:53.012806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.880 [2024-11-19 13:19:53.012812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.880 [2024-11-19 13:19:53.012826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.880 qpair failed and we were unable to recover it. 
00:27:49.880 [2024-11-19 13:19:53.022714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.880 [2024-11-19 13:19:53.022766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.880 [2024-11-19 13:19:53.022780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.880 [2024-11-19 13:19:53.022786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.880 [2024-11-19 13:19:53.022795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.880 [2024-11-19 13:19:53.022810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.880 qpair failed and we were unable to recover it. 00:27:49.880 [2024-11-19 13:19:53.032756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.880 [2024-11-19 13:19:53.032810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.880 [2024-11-19 13:19:53.032823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.880 [2024-11-19 13:19:53.032830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.880 [2024-11-19 13:19:53.032836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.880 [2024-11-19 13:19:53.032850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.880 qpair failed and we were unable to recover it. 00:27:49.880 [2024-11-19 13:19:53.042814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.880 [2024-11-19 13:19:53.042870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.880 [2024-11-19 13:19:53.042883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.880 [2024-11-19 13:19:53.042890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.880 [2024-11-19 13:19:53.042896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.880 [2024-11-19 13:19:53.042910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.880 qpair failed and we were unable to recover it. 
00:27:49.880 [2024-11-19 13:19:53.052818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.880 [2024-11-19 13:19:53.052875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.881 [2024-11-19 13:19:53.052889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.881 [2024-11-19 13:19:53.052895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.881 [2024-11-19 13:19:53.052901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.881 [2024-11-19 13:19:53.052916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.881 qpair failed and we were unable to recover it. 00:27:49.881 [2024-11-19 13:19:53.062816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.881 [2024-11-19 13:19:53.062876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.881 [2024-11-19 13:19:53.062889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.881 [2024-11-19 13:19:53.062896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.881 [2024-11-19 13:19:53.062902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.881 [2024-11-19 13:19:53.062916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.881 qpair failed and we were unable to recover it. 00:27:49.881 [2024-11-19 13:19:53.072793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.881 [2024-11-19 13:19:53.072842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.881 [2024-11-19 13:19:53.072855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.881 [2024-11-19 13:19:53.072861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.881 [2024-11-19 13:19:53.072868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.881 [2024-11-19 13:19:53.072882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.881 qpair failed and we were unable to recover it. 
00:27:49.881 [2024-11-19 13:19:53.082884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.881 [2024-11-19 13:19:53.082940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.881 [2024-11-19 13:19:53.082958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.881 [2024-11-19 13:19:53.082965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.881 [2024-11-19 13:19:53.082970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.881 [2024-11-19 13:19:53.082985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.881 qpair failed and we were unable to recover it. 00:27:49.881 [2024-11-19 13:19:53.092857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.881 [2024-11-19 13:19:53.092917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.881 [2024-11-19 13:19:53.092931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.881 [2024-11-19 13:19:53.092938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.881 [2024-11-19 13:19:53.092943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.881 [2024-11-19 13:19:53.092962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.881 qpair failed and we were unable to recover it. 00:27:49.881 [2024-11-19 13:19:53.102952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.881 [2024-11-19 13:19:53.103009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.881 [2024-11-19 13:19:53.103022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.881 [2024-11-19 13:19:53.103028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.881 [2024-11-19 13:19:53.103034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.881 [2024-11-19 13:19:53.103049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.881 qpair failed and we were unable to recover it. 
00:27:49.881 [2024-11-19 13:19:53.112920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.881 [2024-11-19 13:19:53.112973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.881 [2024-11-19 13:19:53.112991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.881 [2024-11-19 13:19:53.112997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.881 [2024-11-19 13:19:53.113003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.881 [2024-11-19 13:19:53.113018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.881 qpair failed and we were unable to recover it. 00:27:49.881 [2024-11-19 13:19:53.123018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.881 [2024-11-19 13:19:53.123077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.881 [2024-11-19 13:19:53.123090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.881 [2024-11-19 13:19:53.123097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.881 [2024-11-19 13:19:53.123103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.881 [2024-11-19 13:19:53.123119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.881 qpair failed and we were unable to recover it. 00:27:49.881 [2024-11-19 13:19:53.133030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.881 [2024-11-19 13:19:53.133081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.881 [2024-11-19 13:19:53.133094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.881 [2024-11-19 13:19:53.133101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.881 [2024-11-19 13:19:53.133107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.881 [2024-11-19 13:19:53.133121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.881 qpair failed and we were unable to recover it. 
00:27:49.881 [2024-11-19 13:19:53.143005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.881 [2024-11-19 13:19:53.143076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.881 [2024-11-19 13:19:53.143090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.881 [2024-11-19 13:19:53.143097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.881 [2024-11-19 13:19:53.143103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.881 [2024-11-19 13:19:53.143117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.881 qpair failed and we were unable to recover it. 00:27:49.881 [2024-11-19 13:19:53.153114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.881 [2024-11-19 13:19:53.153175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.881 [2024-11-19 13:19:53.153189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.881 [2024-11-19 13:19:53.153199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.881 [2024-11-19 13:19:53.153205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.881 [2024-11-19 13:19:53.153220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.881 qpair failed and we were unable to recover it. 00:27:49.881 [2024-11-19 13:19:53.163122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.881 [2024-11-19 13:19:53.163199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.881 [2024-11-19 13:19:53.163212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.881 [2024-11-19 13:19:53.163220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.881 [2024-11-19 13:19:53.163225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.881 [2024-11-19 13:19:53.163241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.881 qpair failed and we were unable to recover it. 
00:27:49.881 [2024-11-19 13:19:53.173154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.881 [2024-11-19 13:19:53.173241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.881 [2024-11-19 13:19:53.173254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.881 [2024-11-19 13:19:53.173261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.881 [2024-11-19 13:19:53.173267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.881 [2024-11-19 13:19:53.173281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.881 qpair failed and we were unable to recover it. 00:27:49.881 [2024-11-19 13:19:53.183146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.882 [2024-11-19 13:19:53.183199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.882 [2024-11-19 13:19:53.183213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.882 [2024-11-19 13:19:53.183220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.882 [2024-11-19 13:19:53.183226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.882 [2024-11-19 13:19:53.183240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.882 qpair failed and we were unable to recover it. 00:27:49.882 [2024-11-19 13:19:53.193145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.882 [2024-11-19 13:19:53.193201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.882 [2024-11-19 13:19:53.193214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.882 [2024-11-19 13:19:53.193222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.882 [2024-11-19 13:19:53.193228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.882 [2024-11-19 13:19:53.193242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.882 qpair failed and we were unable to recover it. 
00:27:49.882 [2024-11-19 13:19:53.203179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.882 [2024-11-19 13:19:53.203243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.882 [2024-11-19 13:19:53.203257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.882 [2024-11-19 13:19:53.203264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.882 [2024-11-19 13:19:53.203269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.882 [2024-11-19 13:19:53.203285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.882 qpair failed and we were unable to recover it. 00:27:49.882 [2024-11-19 13:19:53.213215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.882 [2024-11-19 13:19:53.213269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.882 [2024-11-19 13:19:53.213282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.882 [2024-11-19 13:19:53.213289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.882 [2024-11-19 13:19:53.213295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.882 [2024-11-19 13:19:53.213309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.882 qpair failed and we were unable to recover it. 00:27:49.882 [2024-11-19 13:19:53.223295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.882 [2024-11-19 13:19:53.223346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.882 [2024-11-19 13:19:53.223359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.882 [2024-11-19 13:19:53.223366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.882 [2024-11-19 13:19:53.223373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.882 [2024-11-19 13:19:53.223388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.882 qpair failed and we were unable to recover it. 
00:27:49.882 [2024-11-19 13:19:53.233320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.882 [2024-11-19 13:19:53.233371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.882 [2024-11-19 13:19:53.233385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.882 [2024-11-19 13:19:53.233391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.882 [2024-11-19 13:19:53.233397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.882 [2024-11-19 13:19:53.233412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.882 qpair failed and we were unable to recover it. 00:27:49.882 [2024-11-19 13:19:53.243383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.882 [2024-11-19 13:19:53.243448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.882 [2024-11-19 13:19:53.243461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.882 [2024-11-19 13:19:53.243468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.882 [2024-11-19 13:19:53.243474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.882 [2024-11-19 13:19:53.243489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.882 qpair failed and we were unable to recover it. 00:27:49.882 [2024-11-19 13:19:53.253407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:49.882 [2024-11-19 13:19:53.253459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:49.882 [2024-11-19 13:19:53.253473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:49.882 [2024-11-19 13:19:53.253480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:49.882 [2024-11-19 13:19:53.253486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:49.882 [2024-11-19 13:19:53.253501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:49.882 qpair failed and we were unable to recover it. 
00:27:50.142 [2024-11-19 13:19:53.263417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.142 [2024-11-19 13:19:53.263472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.142 [2024-11-19 13:19:53.263486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.142 [2024-11-19 13:19:53.263493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.142 [2024-11-19 13:19:53.263499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.142 [2024-11-19 13:19:53.263514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.142 qpair failed and we were unable to recover it. 00:27:50.142 [2024-11-19 13:19:53.273428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.142 [2024-11-19 13:19:53.273479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.142 [2024-11-19 13:19:53.273495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.142 [2024-11-19 13:19:53.273501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.142 [2024-11-19 13:19:53.273508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.142 [2024-11-19 13:19:53.273522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.142 qpair failed and we were unable to recover it. 00:27:50.142 [2024-11-19 13:19:53.283487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.142 [2024-11-19 13:19:53.283566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.142 [2024-11-19 13:19:53.283580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.142 [2024-11-19 13:19:53.283590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.142 [2024-11-19 13:19:53.283596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.142 [2024-11-19 13:19:53.283610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.142 qpair failed and we were unable to recover it. 
00:27:50.142 [2024-11-19 13:19:53.293487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.142 [2024-11-19 13:19:53.293539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.142 [2024-11-19 13:19:53.293553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.142 [2024-11-19 13:19:53.293560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.142 [2024-11-19 13:19:53.293566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.142 [2024-11-19 13:19:53.293580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.142 qpair failed and we were unable to recover it. 00:27:50.142 [2024-11-19 13:19:53.303457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.142 [2024-11-19 13:19:53.303535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.142 [2024-11-19 13:19:53.303548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.142 [2024-11-19 13:19:53.303555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.142 [2024-11-19 13:19:53.303561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.142 [2024-11-19 13:19:53.303576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.142 qpair failed and we were unable to recover it. 00:27:50.142 [2024-11-19 13:19:53.313557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.142 [2024-11-19 13:19:53.313610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.142 [2024-11-19 13:19:53.313624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.142 [2024-11-19 13:19:53.313631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.142 [2024-11-19 13:19:53.313637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.142 [2024-11-19 13:19:53.313651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.142 qpair failed and we were unable to recover it. 
00:27:50.142 [2024-11-19 13:19:53.323568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.142 [2024-11-19 13:19:53.323654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.142 [2024-11-19 13:19:53.323668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.142 [2024-11-19 13:19:53.323674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.142 [2024-11-19 13:19:53.323680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.142 [2024-11-19 13:19:53.323698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.142 qpair failed and we were unable to recover it. 00:27:50.142 [2024-11-19 13:19:53.333609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.142 [2024-11-19 13:19:53.333664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.142 [2024-11-19 13:19:53.333678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.142 [2024-11-19 13:19:53.333685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.142 [2024-11-19 13:19:53.333692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.142 [2024-11-19 13:19:53.333706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.142 qpair failed and we were unable to recover it. 00:27:50.143 [2024-11-19 13:19:53.343632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.143 [2024-11-19 13:19:53.343721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.143 [2024-11-19 13:19:53.343734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.143 [2024-11-19 13:19:53.343741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.143 [2024-11-19 13:19:53.343747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.143 [2024-11-19 13:19:53.343761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.143 qpair failed and we were unable to recover it. 
00:27:50.143 [2024-11-19 13:19:53.353677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.143 [2024-11-19 13:19:53.353734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.143 [2024-11-19 13:19:53.353748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.143 [2024-11-19 13:19:53.353755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.143 [2024-11-19 13:19:53.353760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.143 [2024-11-19 13:19:53.353775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.143 qpair failed and we were unable to recover it. 00:27:50.143 [2024-11-19 13:19:53.363730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.143 [2024-11-19 13:19:53.363832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.143 [2024-11-19 13:19:53.363845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.143 [2024-11-19 13:19:53.363851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.143 [2024-11-19 13:19:53.363857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.143 [2024-11-19 13:19:53.363872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.143 qpair failed and we were unable to recover it. 00:27:50.143 [2024-11-19 13:19:53.373727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.143 [2024-11-19 13:19:53.373784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.143 [2024-11-19 13:19:53.373798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.143 [2024-11-19 13:19:53.373806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.143 [2024-11-19 13:19:53.373812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.143 [2024-11-19 13:19:53.373826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.143 qpair failed and we were unable to recover it. 
00:27:50.143 [2024-11-19 13:19:53.383751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.143 [2024-11-19 13:19:53.383803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.143 [2024-11-19 13:19:53.383817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.143 [2024-11-19 13:19:53.383823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.143 [2024-11-19 13:19:53.383829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.143 [2024-11-19 13:19:53.383844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.143 qpair failed and we were unable to recover it. 00:27:50.143 [2024-11-19 13:19:53.393779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.143 [2024-11-19 13:19:53.393832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.143 [2024-11-19 13:19:53.393846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.143 [2024-11-19 13:19:53.393852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.143 [2024-11-19 13:19:53.393858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.143 [2024-11-19 13:19:53.393873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.143 qpair failed and we were unable to recover it. 00:27:50.143 [2024-11-19 13:19:53.403816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.143 [2024-11-19 13:19:53.403876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.143 [2024-11-19 13:19:53.403889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.143 [2024-11-19 13:19:53.403896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.143 [2024-11-19 13:19:53.403902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.143 [2024-11-19 13:19:53.403917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.143 qpair failed and we were unable to recover it. 
00:27:50.143 [2024-11-19 13:19:53.413862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.143 [2024-11-19 13:19:53.413920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.143 [2024-11-19 13:19:53.413937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.143 [2024-11-19 13:19:53.413944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.143 [2024-11-19 13:19:53.413954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.143 [2024-11-19 13:19:53.413969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.143 qpair failed and we were unable to recover it. 00:27:50.143 [2024-11-19 13:19:53.423892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.143 [2024-11-19 13:19:53.423941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.143 [2024-11-19 13:19:53.423958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.143 [2024-11-19 13:19:53.423964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.143 [2024-11-19 13:19:53.423971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.143 [2024-11-19 13:19:53.423985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.143 qpair failed and we were unable to recover it. 00:27:50.143 [2024-11-19 13:19:53.433845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.143 [2024-11-19 13:19:53.433932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.143 [2024-11-19 13:19:53.433945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.143 [2024-11-19 13:19:53.433957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.143 [2024-11-19 13:19:53.433963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.143 [2024-11-19 13:19:53.433978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.143 qpair failed and we were unable to recover it. 
00:27:50.928 [2024-11-19 13:19:54.075767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.928 [2024-11-19 13:19:54.075836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.928 [2024-11-19 13:19:54.075850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.928 [2024-11-19 13:19:54.075857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.928 [2024-11-19 13:19:54.075863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.928 [2024-11-19 13:19:54.075877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.928 qpair failed and we were unable to recover it. 00:27:50.928 [2024-11-19 13:19:54.085809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.928 [2024-11-19 13:19:54.085864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.928 [2024-11-19 13:19:54.085878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.928 [2024-11-19 13:19:54.085885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.928 [2024-11-19 13:19:54.085891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.928 [2024-11-19 13:19:54.085906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.928 qpair failed and we were unable to recover it. 00:27:50.928 [2024-11-19 13:19:54.095785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.928 [2024-11-19 13:19:54.095840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.928 [2024-11-19 13:19:54.095853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.928 [2024-11-19 13:19:54.095859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.928 [2024-11-19 13:19:54.095866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.928 [2024-11-19 13:19:54.095880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.928 qpair failed and we were unable to recover it. 
00:27:50.928 [2024-11-19 13:19:54.105853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.928 [2024-11-19 13:19:54.105904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.928 [2024-11-19 13:19:54.105920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.928 [2024-11-19 13:19:54.105927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.928 [2024-11-19 13:19:54.105933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.928 [2024-11-19 13:19:54.105952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.928 qpair failed and we were unable to recover it. 00:27:50.928 [2024-11-19 13:19:54.115884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.928 [2024-11-19 13:19:54.115938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.928 [2024-11-19 13:19:54.115954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.928 [2024-11-19 13:19:54.115961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.928 [2024-11-19 13:19:54.115967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.928 [2024-11-19 13:19:54.115982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.928 qpair failed and we were unable to recover it. 00:27:50.928 [2024-11-19 13:19:54.125919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.928 [2024-11-19 13:19:54.125978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.928 [2024-11-19 13:19:54.125991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.928 [2024-11-19 13:19:54.126006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.928 [2024-11-19 13:19:54.126014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.928 [2024-11-19 13:19:54.126031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.928 qpair failed and we were unable to recover it. 
00:27:50.928 [2024-11-19 13:19:54.135944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.928 [2024-11-19 13:19:54.136004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.928 [2024-11-19 13:19:54.136018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.928 [2024-11-19 13:19:54.136025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.928 [2024-11-19 13:19:54.136031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.928 [2024-11-19 13:19:54.136046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.928 qpair failed and we were unable to recover it. 00:27:50.928 [2024-11-19 13:19:54.145971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.928 [2024-11-19 13:19:54.146026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.929 [2024-11-19 13:19:54.146039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.929 [2024-11-19 13:19:54.146045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.929 [2024-11-19 13:19:54.146054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.929 [2024-11-19 13:19:54.146068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.929 qpair failed and we were unable to recover it. 00:27:50.929 [2024-11-19 13:19:54.156000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.929 [2024-11-19 13:19:54.156052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.929 [2024-11-19 13:19:54.156065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.929 [2024-11-19 13:19:54.156072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.929 [2024-11-19 13:19:54.156077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.929 [2024-11-19 13:19:54.156092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.929 qpair failed and we were unable to recover it. 
00:27:50.929 [2024-11-19 13:19:54.166022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.929 [2024-11-19 13:19:54.166078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.929 [2024-11-19 13:19:54.166092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.929 [2024-11-19 13:19:54.166098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.929 [2024-11-19 13:19:54.166104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.929 [2024-11-19 13:19:54.166119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.929 qpair failed and we were unable to recover it. 00:27:50.929 [2024-11-19 13:19:54.176054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.929 [2024-11-19 13:19:54.176113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.929 [2024-11-19 13:19:54.176126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.929 [2024-11-19 13:19:54.176133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.929 [2024-11-19 13:19:54.176139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.929 [2024-11-19 13:19:54.176153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.929 qpair failed and we were unable to recover it. 00:27:50.929 [2024-11-19 13:19:54.186100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.929 [2024-11-19 13:19:54.186161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.929 [2024-11-19 13:19:54.186174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.929 [2024-11-19 13:19:54.186181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.929 [2024-11-19 13:19:54.186186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.929 [2024-11-19 13:19:54.186201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.929 qpair failed and we were unable to recover it. 
00:27:50.929 [2024-11-19 13:19:54.196165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.929 [2024-11-19 13:19:54.196223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.929 [2024-11-19 13:19:54.196236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.929 [2024-11-19 13:19:54.196243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.929 [2024-11-19 13:19:54.196249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.929 [2024-11-19 13:19:54.196264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.929 qpair failed and we were unable to recover it. 00:27:50.929 [2024-11-19 13:19:54.206138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.929 [2024-11-19 13:19:54.206193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.929 [2024-11-19 13:19:54.206206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.929 [2024-11-19 13:19:54.206213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.929 [2024-11-19 13:19:54.206219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.929 [2024-11-19 13:19:54.206233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.929 qpair failed and we were unable to recover it. 00:27:50.929 [2024-11-19 13:19:54.216165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.929 [2024-11-19 13:19:54.216221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.929 [2024-11-19 13:19:54.216235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.929 [2024-11-19 13:19:54.216241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.929 [2024-11-19 13:19:54.216247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.929 [2024-11-19 13:19:54.216261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.929 qpair failed and we were unable to recover it. 
00:27:50.929 [2024-11-19 13:19:54.226193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.929 [2024-11-19 13:19:54.226241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.929 [2024-11-19 13:19:54.226255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.929 [2024-11-19 13:19:54.226262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.929 [2024-11-19 13:19:54.226268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.929 [2024-11-19 13:19:54.226282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.929 qpair failed and we were unable to recover it. 00:27:50.929 [2024-11-19 13:19:54.236231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.929 [2024-11-19 13:19:54.236323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.929 [2024-11-19 13:19:54.236340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.929 [2024-11-19 13:19:54.236347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.929 [2024-11-19 13:19:54.236353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.929 [2024-11-19 13:19:54.236368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.929 qpair failed and we were unable to recover it. 00:27:50.929 [2024-11-19 13:19:54.246255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.929 [2024-11-19 13:19:54.246312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.929 [2024-11-19 13:19:54.246326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.929 [2024-11-19 13:19:54.246332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.929 [2024-11-19 13:19:54.246338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.929 [2024-11-19 13:19:54.246353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.929 qpair failed and we were unable to recover it. 
00:27:50.929 [2024-11-19 13:19:54.256282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.929 [2024-11-19 13:19:54.256337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.929 [2024-11-19 13:19:54.256351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.929 [2024-11-19 13:19:54.256357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.929 [2024-11-19 13:19:54.256363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.929 [2024-11-19 13:19:54.256378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.929 qpair failed and we were unable to recover it. 00:27:50.929 [2024-11-19 13:19:54.266301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.929 [2024-11-19 13:19:54.266354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.929 [2024-11-19 13:19:54.266367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.929 [2024-11-19 13:19:54.266374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.929 [2024-11-19 13:19:54.266380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.929 [2024-11-19 13:19:54.266394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.929 qpair failed and we were unable to recover it. 00:27:50.929 [2024-11-19 13:19:54.276336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.930 [2024-11-19 13:19:54.276394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.930 [2024-11-19 13:19:54.276409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.930 [2024-11-19 13:19:54.276419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.930 [2024-11-19 13:19:54.276425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.930 [2024-11-19 13:19:54.276440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.930 qpair failed and we were unable to recover it. 
00:27:50.930 [2024-11-19 13:19:54.286371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.930 [2024-11-19 13:19:54.286426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.930 [2024-11-19 13:19:54.286440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.930 [2024-11-19 13:19:54.286446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.930 [2024-11-19 13:19:54.286452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.930 [2024-11-19 13:19:54.286466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.930 qpair failed and we were unable to recover it. 00:27:50.930 [2024-11-19 13:19:54.296387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:50.930 [2024-11-19 13:19:54.296444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:50.930 [2024-11-19 13:19:54.296457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:50.930 [2024-11-19 13:19:54.296464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:50.930 [2024-11-19 13:19:54.296470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:50.930 [2024-11-19 13:19:54.296484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:50.930 qpair failed and we were unable to recover it. 00:27:51.190 [2024-11-19 13:19:54.306425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.306480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.306494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.306501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.306507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.306523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 
00:27:51.190 [2024-11-19 13:19:54.316458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.316513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.316527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.316534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.316540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.316558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 00:27:51.190 [2024-11-19 13:19:54.326498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.326585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.326598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.326604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.326610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.326624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 00:27:51.190 [2024-11-19 13:19:54.336520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.336577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.336592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.336599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.336605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.336620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 
00:27:51.190 [2024-11-19 13:19:54.346566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.346652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.346665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.346672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.346678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.346692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 00:27:51.190 [2024-11-19 13:19:54.356590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.356644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.356658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.356665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.356672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.356687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 00:27:51.190 [2024-11-19 13:19:54.366618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.366682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.366696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.366703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.366709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.366723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 
00:27:51.190 [2024-11-19 13:19:54.376617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.376674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.376688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.376694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.376700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.376715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 00:27:51.190 [2024-11-19 13:19:54.386612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.386669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.386688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.386695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.386701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.386720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 00:27:51.190 [2024-11-19 13:19:54.396688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.396744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.396759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.396766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.396772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.396787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 
00:27:51.190 [2024-11-19 13:19:54.406728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.406820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.406834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.406844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.406850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.406865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 00:27:51.190 [2024-11-19 13:19:54.416760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.416816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.416830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.416837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.416843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.416857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 00:27:51.190 [2024-11-19 13:19:54.426778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.426835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.426849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.426856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.426862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.426877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 
00:27:51.190 [2024-11-19 13:19:54.436861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.436918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.436932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.436938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.436944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.436966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 00:27:51.190 [2024-11-19 13:19:54.446849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.446910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.446923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.446930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.446936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.446959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 00:27:51.190 [2024-11-19 13:19:54.456867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.456921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.456936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.456942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.456953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.456969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 
00:27:51.190 [2024-11-19 13:19:54.466829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.190 [2024-11-19 13:19:54.466882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.190 [2024-11-19 13:19:54.466896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.190 [2024-11-19 13:19:54.466903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.190 [2024-11-19 13:19:54.466909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.190 [2024-11-19 13:19:54.466924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.190 qpair failed and we were unable to recover it. 00:27:51.190 [2024-11-19 13:19:54.476919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.191 [2024-11-19 13:19:54.476975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.191 [2024-11-19 13:19:54.476989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.191 [2024-11-19 13:19:54.476996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.191 [2024-11-19 13:19:54.477001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.191 [2024-11-19 13:19:54.477016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.191 qpair failed and we were unable to recover it. 00:27:51.191 [2024-11-19 13:19:54.486968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.191 [2024-11-19 13:19:54.487023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.191 [2024-11-19 13:19:54.487037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.191 [2024-11-19 13:19:54.487044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.191 [2024-11-19 13:19:54.487050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.191 [2024-11-19 13:19:54.487065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.191 qpair failed and we were unable to recover it. 
00:27:51.191 [2024-11-19 13:19:54.496988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.191 [2024-11-19 13:19:54.497046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.191 [2024-11-19 13:19:54.497060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.191 [2024-11-19 13:19:54.497067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.191 [2024-11-19 13:19:54.497073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.191 [2024-11-19 13:19:54.497087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.191 qpair failed and we were unable to recover it. 00:27:51.191 [2024-11-19 13:19:54.507047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.191 [2024-11-19 13:19:54.507103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.191 [2024-11-19 13:19:54.507117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.191 [2024-11-19 13:19:54.507124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.191 [2024-11-19 13:19:54.507130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.191 [2024-11-19 13:19:54.507145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.191 qpair failed and we were unable to recover it. 00:27:51.191 [2024-11-19 13:19:54.517044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.191 [2024-11-19 13:19:54.517095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.191 [2024-11-19 13:19:54.517109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.191 [2024-11-19 13:19:54.517116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.191 [2024-11-19 13:19:54.517122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.191 [2024-11-19 13:19:54.517136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.191 qpair failed and we were unable to recover it. 
00:27:51.191 [2024-11-19 13:19:54.527094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.191 [2024-11-19 13:19:54.527150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.191 [2024-11-19 13:19:54.527162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.191 [2024-11-19 13:19:54.527169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.191 [2024-11-19 13:19:54.527175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.191 [2024-11-19 13:19:54.527190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.191 qpair failed and we were unable to recover it. 00:27:51.191 [2024-11-19 13:19:54.537079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.191 [2024-11-19 13:19:54.537167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.191 [2024-11-19 13:19:54.537184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.191 [2024-11-19 13:19:54.537190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.191 [2024-11-19 13:19:54.537196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.191 [2024-11-19 13:19:54.537211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.191 qpair failed and we were unable to recover it. 00:27:51.191 [2024-11-19 13:19:54.547138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.191 [2024-11-19 13:19:54.547192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.191 [2024-11-19 13:19:54.547206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.191 [2024-11-19 13:19:54.547213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.191 [2024-11-19 13:19:54.547219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.191 [2024-11-19 13:19:54.547233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.191 qpair failed and we were unable to recover it. 
00:27:51.191 [2024-11-19 13:19:54.557101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:51.191 [2024-11-19 13:19:54.557154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:51.191 [2024-11-19 13:19:54.557167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:51.191 [2024-11-19 13:19:54.557174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:51.191 [2024-11-19 13:19:54.557180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:51.191 [2024-11-19 13:19:54.557195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:51.191 qpair failed and we were unable to recover it.
[The same seven-record CONNECT failure sequence repeats for roughly 68 further attempts, one about every 10 ms, from 13:19:54.567 through 13:19:55.239 (wall-clock 00:27:51.452 to 00:27:51.978). Every attempt targets the same endpoint (traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1), fails on qpair id 4 with tqpair=0x7f0198000b90, and ends with "qpair failed and we were unable to recover it."]
00:27:51.978 [2024-11-19 13:19:55.249151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.978 [2024-11-19 13:19:55.249233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.978 [2024-11-19 13:19:55.249247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.978 [2024-11-19 13:19:55.249254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.978 [2024-11-19 13:19:55.249259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.978 [2024-11-19 13:19:55.249274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.978 qpair failed and we were unable to recover it. 00:27:51.978 [2024-11-19 13:19:55.259130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.978 [2024-11-19 13:19:55.259193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.978 [2024-11-19 13:19:55.259207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.978 [2024-11-19 13:19:55.259214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.978 [2024-11-19 13:19:55.259219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.978 [2024-11-19 13:19:55.259235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.978 qpair failed and we were unable to recover it. 00:27:51.978 [2024-11-19 13:19:55.269185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.978 [2024-11-19 13:19:55.269238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.978 [2024-11-19 13:19:55.269252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.978 [2024-11-19 13:19:55.269260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.978 [2024-11-19 13:19:55.269268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.978 [2024-11-19 13:19:55.269283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.978 qpair failed and we were unable to recover it. 
00:27:51.978 [2024-11-19 13:19:55.279217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.978 [2024-11-19 13:19:55.279273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.978 [2024-11-19 13:19:55.279288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.978 [2024-11-19 13:19:55.279295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.978 [2024-11-19 13:19:55.279301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.978 [2024-11-19 13:19:55.279316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.978 qpair failed and we were unable to recover it. 00:27:51.978 [2024-11-19 13:19:55.289256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.978 [2024-11-19 13:19:55.289310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.978 [2024-11-19 13:19:55.289323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.978 [2024-11-19 13:19:55.289329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.978 [2024-11-19 13:19:55.289335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.978 [2024-11-19 13:19:55.289350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.978 qpair failed and we were unable to recover it. 00:27:51.978 [2024-11-19 13:19:55.299276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.978 [2024-11-19 13:19:55.299326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.978 [2024-11-19 13:19:55.299339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.978 [2024-11-19 13:19:55.299346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.978 [2024-11-19 13:19:55.299352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.978 [2024-11-19 13:19:55.299366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.978 qpair failed and we were unable to recover it. 
00:27:51.978 [2024-11-19 13:19:55.309275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.978 [2024-11-19 13:19:55.309329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.978 [2024-11-19 13:19:55.309342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.978 [2024-11-19 13:19:55.309349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.978 [2024-11-19 13:19:55.309355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.978 [2024-11-19 13:19:55.309369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.978 qpair failed and we were unable to recover it. 00:27:51.978 [2024-11-19 13:19:55.319325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.978 [2024-11-19 13:19:55.319401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.978 [2024-11-19 13:19:55.319414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.978 [2024-11-19 13:19:55.319420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.978 [2024-11-19 13:19:55.319426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.978 [2024-11-19 13:19:55.319440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.978 qpair failed and we were unable to recover it. 00:27:51.978 [2024-11-19 13:19:55.329421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.978 [2024-11-19 13:19:55.329490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.978 [2024-11-19 13:19:55.329503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.978 [2024-11-19 13:19:55.329509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.978 [2024-11-19 13:19:55.329515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.978 [2024-11-19 13:19:55.329531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.978 qpair failed and we were unable to recover it. 
00:27:51.979 [2024-11-19 13:19:55.339385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.979 [2024-11-19 13:19:55.339440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.979 [2024-11-19 13:19:55.339454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.979 [2024-11-19 13:19:55.339461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.979 [2024-11-19 13:19:55.339467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.979 [2024-11-19 13:19:55.339482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.979 qpair failed and we were unable to recover it. 00:27:51.979 [2024-11-19 13:19:55.349340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:51.979 [2024-11-19 13:19:55.349396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:51.979 [2024-11-19 13:19:55.349410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:51.979 [2024-11-19 13:19:55.349417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:51.979 [2024-11-19 13:19:55.349422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:51.979 [2024-11-19 13:19:55.349437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:51.979 qpair failed and we were unable to recover it. 00:27:52.240 [2024-11-19 13:19:55.359438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.240 [2024-11-19 13:19:55.359491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.240 [2024-11-19 13:19:55.359507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.240 [2024-11-19 13:19:55.359514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.240 [2024-11-19 13:19:55.359520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.240 [2024-11-19 13:19:55.359535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.240 qpair failed and we were unable to recover it. 
00:27:52.240 [2024-11-19 13:19:55.369484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.240 [2024-11-19 13:19:55.369544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.240 [2024-11-19 13:19:55.369557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.240 [2024-11-19 13:19:55.369563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.240 [2024-11-19 13:19:55.369569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.240 [2024-11-19 13:19:55.369583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.240 qpair failed and we were unable to recover it. 00:27:52.240 [2024-11-19 13:19:55.379506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.240 [2024-11-19 13:19:55.379558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.240 [2024-11-19 13:19:55.379571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.240 [2024-11-19 13:19:55.379578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.240 [2024-11-19 13:19:55.379584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.240 [2024-11-19 13:19:55.379598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.240 qpair failed and we were unable to recover it. 00:27:52.240 [2024-11-19 13:19:55.389528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.240 [2024-11-19 13:19:55.389582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.240 [2024-11-19 13:19:55.389595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.240 [2024-11-19 13:19:55.389601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.240 [2024-11-19 13:19:55.389607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.240 [2024-11-19 13:19:55.389621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.240 qpair failed and we were unable to recover it. 
00:27:52.240 [2024-11-19 13:19:55.399555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.240 [2024-11-19 13:19:55.399608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.240 [2024-11-19 13:19:55.399622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.240 [2024-11-19 13:19:55.399631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.240 [2024-11-19 13:19:55.399637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.240 [2024-11-19 13:19:55.399652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.240 qpair failed and we were unable to recover it. 00:27:52.240 [2024-11-19 13:19:55.409617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.240 [2024-11-19 13:19:55.409703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.240 [2024-11-19 13:19:55.409716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.240 [2024-11-19 13:19:55.409723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.240 [2024-11-19 13:19:55.409729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.240 [2024-11-19 13:19:55.409743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.240 qpair failed and we were unable to recover it. 00:27:52.240 [2024-11-19 13:19:55.419619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.240 [2024-11-19 13:19:55.419691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.240 [2024-11-19 13:19:55.419704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.240 [2024-11-19 13:19:55.419711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.240 [2024-11-19 13:19:55.419717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.240 [2024-11-19 13:19:55.419732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.240 qpair failed and we were unable to recover it. 
00:27:52.240 [2024-11-19 13:19:55.429653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.240 [2024-11-19 13:19:55.429710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.240 [2024-11-19 13:19:55.429723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.240 [2024-11-19 13:19:55.429730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.240 [2024-11-19 13:19:55.429736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.240 [2024-11-19 13:19:55.429751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.240 qpair failed and we were unable to recover it. 00:27:52.240 [2024-11-19 13:19:55.439619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.240 [2024-11-19 13:19:55.439675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.240 [2024-11-19 13:19:55.439688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.240 [2024-11-19 13:19:55.439695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.240 [2024-11-19 13:19:55.439701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.240 [2024-11-19 13:19:55.439722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.240 qpair failed and we were unable to recover it. 00:27:52.240 [2024-11-19 13:19:55.449711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.240 [2024-11-19 13:19:55.449763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.240 [2024-11-19 13:19:55.449777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.240 [2024-11-19 13:19:55.449783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.240 [2024-11-19 13:19:55.449789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.240 [2024-11-19 13:19:55.449803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.240 qpair failed and we were unable to recover it. 
00:27:52.240 [2024-11-19 13:19:55.459705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.240 [2024-11-19 13:19:55.459801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.240 [2024-11-19 13:19:55.459814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.240 [2024-11-19 13:19:55.459821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.240 [2024-11-19 13:19:55.459827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.240 [2024-11-19 13:19:55.459841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.240 qpair failed and we were unable to recover it. 00:27:52.240 [2024-11-19 13:19:55.469714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.240 [2024-11-19 13:19:55.469809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.240 [2024-11-19 13:19:55.469822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.240 [2024-11-19 13:19:55.469829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.240 [2024-11-19 13:19:55.469835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.240 [2024-11-19 13:19:55.469849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.240 qpair failed and we were unable to recover it. 00:27:52.240 [2024-11-19 13:19:55.479844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.240 [2024-11-19 13:19:55.479905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.240 [2024-11-19 13:19:55.479919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.240 [2024-11-19 13:19:55.479926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.240 [2024-11-19 13:19:55.479932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.241 [2024-11-19 13:19:55.479950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.241 qpair failed and we were unable to recover it. 
00:27:52.241 [2024-11-19 13:19:55.489854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.241 [2024-11-19 13:19:55.489911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.241 [2024-11-19 13:19:55.489925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.241 [2024-11-19 13:19:55.489931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.241 [2024-11-19 13:19:55.489937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.241 [2024-11-19 13:19:55.489955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.241 qpair failed and we were unable to recover it. 00:27:52.241 [2024-11-19 13:19:55.499847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.241 [2024-11-19 13:19:55.499897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.241 [2024-11-19 13:19:55.499911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.241 [2024-11-19 13:19:55.499917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.241 [2024-11-19 13:19:55.499924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.241 [2024-11-19 13:19:55.499938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.241 qpair failed and we were unable to recover it. 00:27:52.241 [2024-11-19 13:19:55.509831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.241 [2024-11-19 13:19:55.509927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.241 [2024-11-19 13:19:55.509941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.241 [2024-11-19 13:19:55.509950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.241 [2024-11-19 13:19:55.509956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.241 [2024-11-19 13:19:55.509972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.241 qpair failed and we were unable to recover it. 
00:27:52.241 [2024-11-19 13:19:55.519912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.241 [2024-11-19 13:19:55.519963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.241 [2024-11-19 13:19:55.519976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.241 [2024-11-19 13:19:55.519983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.241 [2024-11-19 13:19:55.519989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.241 [2024-11-19 13:19:55.520004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.241 qpair failed and we were unable to recover it. 00:27:52.241 [2024-11-19 13:19:55.529942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.241 [2024-11-19 13:19:55.530003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.241 [2024-11-19 13:19:55.530017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.241 [2024-11-19 13:19:55.530027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.241 [2024-11-19 13:19:55.530033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.241 [2024-11-19 13:19:55.530048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.241 qpair failed and we were unable to recover it. 00:27:52.241 [2024-11-19 13:19:55.539969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.241 [2024-11-19 13:19:55.540021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.241 [2024-11-19 13:19:55.540034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.241 [2024-11-19 13:19:55.540040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.241 [2024-11-19 13:19:55.540047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.241 [2024-11-19 13:19:55.540062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.241 qpair failed and we were unable to recover it. 
00:27:52.241 [2024-11-19 13:19:55.549998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.241 [2024-11-19 13:19:55.550052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.241 [2024-11-19 13:19:55.550065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.241 [2024-11-19 13:19:55.550072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.241 [2024-11-19 13:19:55.550078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.241 [2024-11-19 13:19:55.550093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.241 qpair failed and we were unable to recover it. 00:27:52.241 [2024-11-19 13:19:55.560006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.241 [2024-11-19 13:19:55.560055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.241 [2024-11-19 13:19:55.560069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.241 [2024-11-19 13:19:55.560075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.241 [2024-11-19 13:19:55.560082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.241 [2024-11-19 13:19:55.560096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.241 qpair failed and we were unable to recover it. 00:27:52.241 [2024-11-19 13:19:55.570047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.241 [2024-11-19 13:19:55.570099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.241 [2024-11-19 13:19:55.570112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.241 [2024-11-19 13:19:55.570118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.241 [2024-11-19 13:19:55.570124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.241 [2024-11-19 13:19:55.570142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.241 qpair failed and we were unable to recover it. 
00:27:52.241 [2024-11-19 13:19:55.580091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.241 [2024-11-19 13:19:55.580148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.241 [2024-11-19 13:19:55.580161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.241 [2024-11-19 13:19:55.580168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.241 [2024-11-19 13:19:55.580174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.241 [2024-11-19 13:19:55.580188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.241 qpair failed and we were unable to recover it. 00:27:52.241 [2024-11-19 13:19:55.590111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.241 [2024-11-19 13:19:55.590173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.241 [2024-11-19 13:19:55.590186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.241 [2024-11-19 13:19:55.590192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.241 [2024-11-19 13:19:55.590198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.241 [2024-11-19 13:19:55.590213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.241 qpair failed and we were unable to recover it. 00:27:52.241 [2024-11-19 13:19:55.600129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.241 [2024-11-19 13:19:55.600179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.241 [2024-11-19 13:19:55.600192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.241 [2024-11-19 13:19:55.600199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.241 [2024-11-19 13:19:55.600205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.241 [2024-11-19 13:19:55.600220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.241 qpair failed and we were unable to recover it. 
00:27:52.241 [2024-11-19 13:19:55.610096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.241 [2024-11-19 13:19:55.610173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.241 [2024-11-19 13:19:55.610186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.242 [2024-11-19 13:19:55.610192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.242 [2024-11-19 13:19:55.610198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.242 [2024-11-19 13:19:55.610213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.242 qpair failed and we were unable to recover it. 00:27:52.501 [2024-11-19 13:19:55.620198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.501 [2024-11-19 13:19:55.620277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.501 [2024-11-19 13:19:55.620291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.501 [2024-11-19 13:19:55.620297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.501 [2024-11-19 13:19:55.620303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.501 [2024-11-19 13:19:55.620318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.501 qpair failed and we were unable to recover it. 00:27:52.501 [2024-11-19 13:19:55.630223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.501 [2024-11-19 13:19:55.630275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.501 [2024-11-19 13:19:55.630289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.501 [2024-11-19 13:19:55.630295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.501 [2024-11-19 13:19:55.630301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.501 [2024-11-19 13:19:55.630315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.501 qpair failed and we were unable to recover it. 
00:27:52.501 [2024-11-19 13:19:55.640253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.501 [2024-11-19 13:19:55.640306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.501 [2024-11-19 13:19:55.640319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.501 [2024-11-19 13:19:55.640326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.501 [2024-11-19 13:19:55.640332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.501 [2024-11-19 13:19:55.640347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.501 qpair failed and we were unable to recover it. 00:27:52.501 [2024-11-19 13:19:55.650293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.501 [2024-11-19 13:19:55.650387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.501 [2024-11-19 13:19:55.650400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.501 [2024-11-19 13:19:55.650407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.501 [2024-11-19 13:19:55.650412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.501 [2024-11-19 13:19:55.650427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.501 qpair failed and we were unable to recover it. 00:27:52.501 [2024-11-19 13:19:55.660295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.501 [2024-11-19 13:19:55.660366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.501 [2024-11-19 13:19:55.660383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.502 [2024-11-19 13:19:55.660390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.502 [2024-11-19 13:19:55.660395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.502 [2024-11-19 13:19:55.660410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.502 qpair failed and we were unable to recover it. 
00:27:52.502 [2024-11-19 13:19:55.670312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.502 [2024-11-19 13:19:55.670362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.502 [2024-11-19 13:19:55.670376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.502 [2024-11-19 13:19:55.670383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.502 [2024-11-19 13:19:55.670389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.502 [2024-11-19 13:19:55.670403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.502 qpair failed and we were unable to recover it. 00:27:52.502 [2024-11-19 13:19:55.680389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.502 [2024-11-19 13:19:55.680446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.502 [2024-11-19 13:19:55.680459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.502 [2024-11-19 13:19:55.680466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.502 [2024-11-19 13:19:55.680472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.502 [2024-11-19 13:19:55.680486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.502 qpair failed and we were unable to recover it. 00:27:52.502 [2024-11-19 13:19:55.690384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.502 [2024-11-19 13:19:55.690437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.502 [2024-11-19 13:19:55.690451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.502 [2024-11-19 13:19:55.690457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.502 [2024-11-19 13:19:55.690463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.502 [2024-11-19 13:19:55.690478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.502 qpair failed and we were unable to recover it. 
00:27:52.502 [2024-11-19 13:19:55.700415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.502 [2024-11-19 13:19:55.700470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.502 [2024-11-19 13:19:55.700483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.502 [2024-11-19 13:19:55.700490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.502 [2024-11-19 13:19:55.700499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.502 [2024-11-19 13:19:55.700514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.502 qpair failed and we were unable to recover it. 00:27:52.502 [2024-11-19 13:19:55.710434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.502 [2024-11-19 13:19:55.710489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.502 [2024-11-19 13:19:55.710503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.502 [2024-11-19 13:19:55.710509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.502 [2024-11-19 13:19:55.710515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.502 [2024-11-19 13:19:55.710530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.502 qpair failed and we were unable to recover it. 00:27:52.502 [2024-11-19 13:19:55.720501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.502 [2024-11-19 13:19:55.720560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.502 [2024-11-19 13:19:55.720573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.502 [2024-11-19 13:19:55.720580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.502 [2024-11-19 13:19:55.720586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.502 [2024-11-19 13:19:55.720601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.502 qpair failed and we were unable to recover it. 
00:27:52.502 [2024-11-19 13:19:55.730507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.502 [2024-11-19 13:19:55.730563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.502 [2024-11-19 13:19:55.730576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.502 [2024-11-19 13:19:55.730582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.502 [2024-11-19 13:19:55.730588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.502 [2024-11-19 13:19:55.730603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.502 qpair failed and we were unable to recover it. 00:27:52.502 [2024-11-19 13:19:55.740518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.502 [2024-11-19 13:19:55.740568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.502 [2024-11-19 13:19:55.740581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.502 [2024-11-19 13:19:55.740588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.502 [2024-11-19 13:19:55.740594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.502 [2024-11-19 13:19:55.740609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.502 qpair failed and we were unable to recover it. 00:27:52.502 [2024-11-19 13:19:55.750583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.502 [2024-11-19 13:19:55.750637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.502 [2024-11-19 13:19:55.750651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.502 [2024-11-19 13:19:55.750658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.502 [2024-11-19 13:19:55.750664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.502 [2024-11-19 13:19:55.750679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.502 qpair failed and we were unable to recover it. 
00:27:52.502 [2024-11-19 13:19:55.760618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.502 [2024-11-19 13:19:55.760669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.502 [2024-11-19 13:19:55.760682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.502 [2024-11-19 13:19:55.760689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.502 [2024-11-19 13:19:55.760695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.502 [2024-11-19 13:19:55.760710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.502 qpair failed and we were unable to recover it. 00:27:52.502 [2024-11-19 13:19:55.770609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.502 [2024-11-19 13:19:55.770666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.502 [2024-11-19 13:19:55.770680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.502 [2024-11-19 13:19:55.770687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.502 [2024-11-19 13:19:55.770693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.502 [2024-11-19 13:19:55.770708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.502 qpair failed and we were unable to recover it. 00:27:52.502 [2024-11-19 13:19:55.780552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.502 [2024-11-19 13:19:55.780636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.502 [2024-11-19 13:19:55.780650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.502 [2024-11-19 13:19:55.780657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.502 [2024-11-19 13:19:55.780663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.502 [2024-11-19 13:19:55.780677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.502 qpair failed and we were unable to recover it. 
00:27:52.502 [2024-11-19 13:19:55.790671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.502 [2024-11-19 13:19:55.790761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.502 [2024-11-19 13:19:55.790777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.503 [2024-11-19 13:19:55.790784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.503 [2024-11-19 13:19:55.790790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.503 [2024-11-19 13:19:55.790806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.503 qpair failed and we were unable to recover it. 00:27:52.503 [2024-11-19 13:19:55.800677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.503 [2024-11-19 13:19:55.800733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.503 [2024-11-19 13:19:55.800747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.503 [2024-11-19 13:19:55.800754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.503 [2024-11-19 13:19:55.800759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.503 [2024-11-19 13:19:55.800774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.503 qpair failed and we were unable to recover it. 00:27:52.503 [2024-11-19 13:19:55.810637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.503 [2024-11-19 13:19:55.810699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.503 [2024-11-19 13:19:55.810712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.503 [2024-11-19 13:19:55.810719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.503 [2024-11-19 13:19:55.810725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.503 [2024-11-19 13:19:55.810739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.503 qpair failed and we were unable to recover it. 
00:27:52.503 [2024-11-19 13:19:55.820758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.503 [2024-11-19 13:19:55.820842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.503 [2024-11-19 13:19:55.820855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.503 [2024-11-19 13:19:55.820862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.503 [2024-11-19 13:19:55.820867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.503 [2024-11-19 13:19:55.820882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.503 qpair failed and we were unable to recover it. 00:27:52.503 [2024-11-19 13:19:55.830777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.503 [2024-11-19 13:19:55.830869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.503 [2024-11-19 13:19:55.830882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.503 [2024-11-19 13:19:55.830889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.503 [2024-11-19 13:19:55.830898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.503 [2024-11-19 13:19:55.830912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.503 qpair failed and we were unable to recover it. 00:27:52.503 [2024-11-19 13:19:55.840838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.503 [2024-11-19 13:19:55.840919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.503 [2024-11-19 13:19:55.840933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.503 [2024-11-19 13:19:55.840940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.503 [2024-11-19 13:19:55.840945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.503 [2024-11-19 13:19:55.840964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.503 qpair failed and we were unable to recover it. 
00:27:52.503 [2024-11-19 13:19:55.850837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.503 [2024-11-19 13:19:55.850895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.503 [2024-11-19 13:19:55.850908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.503 [2024-11-19 13:19:55.850915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.503 [2024-11-19 13:19:55.850921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.503 [2024-11-19 13:19:55.850935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.503 qpair failed and we were unable to recover it. 00:27:52.503 [2024-11-19 13:19:55.860851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.503 [2024-11-19 13:19:55.860905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.503 [2024-11-19 13:19:55.860918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.503 [2024-11-19 13:19:55.860925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.503 [2024-11-19 13:19:55.860931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.503 [2024-11-19 13:19:55.860945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.503 qpair failed and we were unable to recover it. 00:27:52.503 [2024-11-19 13:19:55.870934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.503 [2024-11-19 13:19:55.870989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.503 [2024-11-19 13:19:55.871003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.503 [2024-11-19 13:19:55.871009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.503 [2024-11-19 13:19:55.871015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.503 [2024-11-19 13:19:55.871030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.503 qpair failed and we were unable to recover it. 
00:27:52.763 [2024-11-19 13:19:55.880849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.763 [2024-11-19 13:19:55.880903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.763 [2024-11-19 13:19:55.880916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.763 [2024-11-19 13:19:55.880923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.763 [2024-11-19 13:19:55.880929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.763 [2024-11-19 13:19:55.880944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.763 qpair failed and we were unable to recover it. 00:27:52.763 [2024-11-19 13:19:55.890929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.763 [2024-11-19 13:19:55.891007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.763 [2024-11-19 13:19:55.891021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.763 [2024-11-19 13:19:55.891027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.763 [2024-11-19 13:19:55.891034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.763 [2024-11-19 13:19:55.891048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.763 qpair failed and we were unable to recover it. 00:27:52.763 [2024-11-19 13:19:55.900963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.763 [2024-11-19 13:19:55.901048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.763 [2024-11-19 13:19:55.901062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.763 [2024-11-19 13:19:55.901068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.763 [2024-11-19 13:19:55.901075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.763 [2024-11-19 13:19:55.901089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.763 qpair failed and we were unable to recover it. 
00:27:52.763 [2024-11-19 13:19:55.910971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.763 [2024-11-19 13:19:55.911024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.763 [2024-11-19 13:19:55.911038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.763 [2024-11-19 13:19:55.911044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.763 [2024-11-19 13:19:55.911050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.763 [2024-11-19 13:19:55.911065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.763 qpair failed and we were unable to recover it. 00:27:52.763 [2024-11-19 13:19:55.920957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.763 [2024-11-19 13:19:55.921015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.763 [2024-11-19 13:19:55.921031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.763 [2024-11-19 13:19:55.921038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.763 [2024-11-19 13:19:55.921044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.763 [2024-11-19 13:19:55.921059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.763 qpair failed and we were unable to recover it. 00:27:52.764 [2024-11-19 13:19:55.931011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.764 [2024-11-19 13:19:55.931071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.764 [2024-11-19 13:19:55.931085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.764 [2024-11-19 13:19:55.931092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.764 [2024-11-19 13:19:55.931098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.764 [2024-11-19 13:19:55.931112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.764 qpair failed and we were unable to recover it. 
00:27:52.764 [2024-11-19 13:19:55.941013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.764 [2024-11-19 13:19:55.941072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.764 [2024-11-19 13:19:55.941085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.764 [2024-11-19 13:19:55.941092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.764 [2024-11-19 13:19:55.941098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.764 [2024-11-19 13:19:55.941112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.764 qpair failed and we were unable to recover it. 00:27:52.764 [2024-11-19 13:19:55.951120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.764 [2024-11-19 13:19:55.951217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.764 [2024-11-19 13:19:55.951230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.764 [2024-11-19 13:19:55.951237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.764 [2024-11-19 13:19:55.951243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.764 [2024-11-19 13:19:55.951258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.764 qpair failed and we were unable to recover it. 00:27:52.764 [2024-11-19 13:19:55.961115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.764 [2024-11-19 13:19:55.961214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.764 [2024-11-19 13:19:55.961228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.764 [2024-11-19 13:19:55.961237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.764 [2024-11-19 13:19:55.961243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.764 [2024-11-19 13:19:55.961258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.764 qpair failed and we were unable to recover it. 
00:27:52.764 [2024-11-19 13:19:55.971182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.764 [2024-11-19 13:19:55.971245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.764 [2024-11-19 13:19:55.971259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.764 [2024-11-19 13:19:55.971266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.764 [2024-11-19 13:19:55.971271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.764 [2024-11-19 13:19:55.971287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.764 qpair failed and we were unable to recover it. 00:27:52.764 [2024-11-19 13:19:55.981227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.764 [2024-11-19 13:19:55.981283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.764 [2024-11-19 13:19:55.981297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.764 [2024-11-19 13:19:55.981303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.764 [2024-11-19 13:19:55.981310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.764 [2024-11-19 13:19:55.981324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.764 qpair failed and we were unable to recover it. 00:27:52.764 [2024-11-19 13:19:55.991141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.764 [2024-11-19 13:19:55.991210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.764 [2024-11-19 13:19:55.991224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.764 [2024-11-19 13:19:55.991230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.764 [2024-11-19 13:19:55.991237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.764 [2024-11-19 13:19:55.991251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.764 qpair failed and we were unable to recover it. 
00:27:52.764 [2024-11-19 13:19:56.001176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.764 [2024-11-19 13:19:56.001235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.764 [2024-11-19 13:19:56.001249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.764 [2024-11-19 13:19:56.001256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.764 [2024-11-19 13:19:56.001262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.764 [2024-11-19 13:19:56.001280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.764 qpair failed and we were unable to recover it. 00:27:52.764 [2024-11-19 13:19:56.011276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.764 [2024-11-19 13:19:56.011330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.764 [2024-11-19 13:19:56.011344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.764 [2024-11-19 13:19:56.011351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.764 [2024-11-19 13:19:56.011357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.764 [2024-11-19 13:19:56.011372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.764 qpair failed and we were unable to recover it. 00:27:52.764 [2024-11-19 13:19:56.021295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.764 [2024-11-19 13:19:56.021350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.764 [2024-11-19 13:19:56.021364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.764 [2024-11-19 13:19:56.021371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.764 [2024-11-19 13:19:56.021377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.764 [2024-11-19 13:19:56.021391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.764 qpair failed and we were unable to recover it. 
00:27:52.764 [2024-11-19 13:19:56.031348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.764 [2024-11-19 13:19:56.031402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.764 [2024-11-19 13:19:56.031415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.764 [2024-11-19 13:19:56.031422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.764 [2024-11-19 13:19:56.031428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.764 [2024-11-19 13:19:56.031442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.764 qpair failed and we were unable to recover it. 00:27:52.764 [2024-11-19 13:19:56.041387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.764 [2024-11-19 13:19:56.041441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.764 [2024-11-19 13:19:56.041455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.764 [2024-11-19 13:19:56.041462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.764 [2024-11-19 13:19:56.041468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.764 [2024-11-19 13:19:56.041482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.764 qpair failed and we were unable to recover it. 00:27:52.764 [2024-11-19 13:19:56.051333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.764 [2024-11-19 13:19:56.051429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.764 [2024-11-19 13:19:56.051443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.764 [2024-11-19 13:19:56.051450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.764 [2024-11-19 13:19:56.051455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.764 [2024-11-19 13:19:56.051469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.765 qpair failed and we were unable to recover it. 
00:27:52.765 [2024-11-19 13:19:56.061376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.765 [2024-11-19 13:19:56.061476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.765 [2024-11-19 13:19:56.061489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.765 [2024-11-19 13:19:56.061496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.765 [2024-11-19 13:19:56.061502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.765 [2024-11-19 13:19:56.061516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.765 qpair failed and we were unable to recover it. 00:27:52.765 [2024-11-19 13:19:56.071438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.765 [2024-11-19 13:19:56.071515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.765 [2024-11-19 13:19:56.071528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.765 [2024-11-19 13:19:56.071535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.765 [2024-11-19 13:19:56.071541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.765 [2024-11-19 13:19:56.071555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.765 qpair failed and we were unable to recover it. 00:27:52.765 [2024-11-19 13:19:56.081454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.765 [2024-11-19 13:19:56.081552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.765 [2024-11-19 13:19:56.081565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.765 [2024-11-19 13:19:56.081572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.765 [2024-11-19 13:19:56.081577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.765 [2024-11-19 13:19:56.081592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.765 qpair failed and we were unable to recover it. 
00:27:52.765 [2024-11-19 13:19:56.091487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.765 [2024-11-19 13:19:56.091579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.765 [2024-11-19 13:19:56.091593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.765 [2024-11-19 13:19:56.091603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.765 [2024-11-19 13:19:56.091609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.765 [2024-11-19 13:19:56.091623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.765 qpair failed and we were unable to recover it. 00:27:52.765 [2024-11-19 13:19:56.101502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.765 [2024-11-19 13:19:56.101584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.765 [2024-11-19 13:19:56.101597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.765 [2024-11-19 13:19:56.101604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.765 [2024-11-19 13:19:56.101610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.765 [2024-11-19 13:19:56.101624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.765 qpair failed and we were unable to recover it. 00:27:52.765 [2024-11-19 13:19:56.111561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.765 [2024-11-19 13:19:56.111611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.765 [2024-11-19 13:19:56.111625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.765 [2024-11-19 13:19:56.111631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.765 [2024-11-19 13:19:56.111637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.765 [2024-11-19 13:19:56.111651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.765 qpair failed and we were unable to recover it. 
00:27:52.765 [2024-11-19 13:19:56.121525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.765 [2024-11-19 13:19:56.121575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.765 [2024-11-19 13:19:56.121588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.765 [2024-11-19 13:19:56.121594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.765 [2024-11-19 13:19:56.121600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.765 [2024-11-19 13:19:56.121615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.765 qpair failed and we were unable to recover it. 00:27:52.765 [2024-11-19 13:19:56.131585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:52.765 [2024-11-19 13:19:56.131671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:52.765 [2024-11-19 13:19:56.131685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:52.765 [2024-11-19 13:19:56.131691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:52.765 [2024-11-19 13:19:56.131697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:52.765 [2024-11-19 13:19:56.131715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.765 qpair failed and we were unable to recover it. 00:27:53.025 [2024-11-19 13:19:56.141611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.025 [2024-11-19 13:19:56.141681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.025 [2024-11-19 13:19:56.141695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.025 [2024-11-19 13:19:56.141702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.025 [2024-11-19 13:19:56.141708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.025 [2024-11-19 13:19:56.141723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.025 qpair failed and we were unable to recover it. 
00:27:53.025 [2024-11-19 13:19:56.151678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.025 [2024-11-19 13:19:56.151731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.025 [2024-11-19 13:19:56.151744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.025 [2024-11-19 13:19:56.151750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.025 [2024-11-19 13:19:56.151757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.025 [2024-11-19 13:19:56.151771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.025 qpair failed and we were unable to recover it. 00:27:53.025 [2024-11-19 13:19:56.161633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.025 [2024-11-19 13:19:56.161686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.025 [2024-11-19 13:19:56.161699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.025 [2024-11-19 13:19:56.161706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.025 [2024-11-19 13:19:56.161712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.025 [2024-11-19 13:19:56.161727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.025 qpair failed and we were unable to recover it. 00:27:53.025 [2024-11-19 13:19:56.171745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.025 [2024-11-19 13:19:56.171801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.025 [2024-11-19 13:19:56.171815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.025 [2024-11-19 13:19:56.171821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.025 [2024-11-19 13:19:56.171827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.025 [2024-11-19 13:19:56.171842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.025 qpair failed and we were unable to recover it. 
00:27:53.025 [2024-11-19 13:19:56.181749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.025 [2024-11-19 13:19:56.181806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.025 [2024-11-19 13:19:56.181820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.025 [2024-11-19 13:19:56.181826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.025 [2024-11-19 13:19:56.181832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.025 [2024-11-19 13:19:56.181847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.025 qpair failed and we were unable to recover it. 00:27:53.025 [2024-11-19 13:19:56.191777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.025 [2024-11-19 13:19:56.191832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.025 [2024-11-19 13:19:56.191846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.025 [2024-11-19 13:19:56.191852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.025 [2024-11-19 13:19:56.191858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.025 [2024-11-19 13:19:56.191873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.025 qpair failed and we were unable to recover it. 00:27:53.025 [2024-11-19 13:19:56.201803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.025 [2024-11-19 13:19:56.201855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.025 [2024-11-19 13:19:56.201868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.025 [2024-11-19 13:19:56.201875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.025 [2024-11-19 13:19:56.201881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.025 [2024-11-19 13:19:56.201895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.025 qpair failed and we were unable to recover it. 
00:27:53.025 [2024-11-19 13:19:56.211787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.025 [2024-11-19 13:19:56.211843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.025 [2024-11-19 13:19:56.211856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.025 [2024-11-19 13:19:56.211863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.025 [2024-11-19 13:19:56.211869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.025 [2024-11-19 13:19:56.211883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.025 qpair failed and we were unable to recover it. 00:27:53.025 [2024-11-19 13:19:56.221884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.025 [2024-11-19 13:19:56.221940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.025 [2024-11-19 13:19:56.221960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.025 [2024-11-19 13:19:56.221967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.025 [2024-11-19 13:19:56.221973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.025 [2024-11-19 13:19:56.221988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.025 qpair failed and we were unable to recover it. 00:27:53.026 [2024-11-19 13:19:56.231858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.026 [2024-11-19 13:19:56.231953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.026 [2024-11-19 13:19:56.231966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.026 [2024-11-19 13:19:56.231973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.026 [2024-11-19 13:19:56.231979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.026 [2024-11-19 13:19:56.231994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.026 qpair failed and we were unable to recover it. 
00:27:53.026 [2024-11-19 13:19:56.241879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.026 [2024-11-19 13:19:56.241931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.026 [2024-11-19 13:19:56.241945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.026 [2024-11-19 13:19:56.241956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.026 [2024-11-19 13:19:56.241962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.026 [2024-11-19 13:19:56.241977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.026 qpair failed and we were unable to recover it. 00:27:53.026 [2024-11-19 13:19:56.251942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.026 [2024-11-19 13:19:56.252033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.026 [2024-11-19 13:19:56.252046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.026 [2024-11-19 13:19:56.252053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.026 [2024-11-19 13:19:56.252059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.026 [2024-11-19 13:19:56.252073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.026 qpair failed and we were unable to recover it. 00:27:53.026 [2024-11-19 13:19:56.262011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.026 [2024-11-19 13:19:56.262065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.026 [2024-11-19 13:19:56.262078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.026 [2024-11-19 13:19:56.262084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.026 [2024-11-19 13:19:56.262094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.026 [2024-11-19 13:19:56.262109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.026 qpair failed and we were unable to recover it. 
00:27:53.026 [2024-11-19 13:19:56.271951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.026 [2024-11-19 13:19:56.272005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.026 [2024-11-19 13:19:56.272018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.026 [2024-11-19 13:19:56.272024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.026 [2024-11-19 13:19:56.272030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.026 [2024-11-19 13:19:56.272045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.026 qpair failed and we were unable to recover it. 00:27:53.026 [2024-11-19 13:19:56.282077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.026 [2024-11-19 13:19:56.282136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.026 [2024-11-19 13:19:56.282150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.026 [2024-11-19 13:19:56.282158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.026 [2024-11-19 13:19:56.282164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.026 [2024-11-19 13:19:56.282178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.026 qpair failed and we were unable to recover it. 00:27:53.026 [2024-11-19 13:19:56.292099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.026 [2024-11-19 13:19:56.292156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.026 [2024-11-19 13:19:56.292168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.026 [2024-11-19 13:19:56.292175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.026 [2024-11-19 13:19:56.292181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.026 [2024-11-19 13:19:56.292195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.026 qpair failed and we were unable to recover it. 
00:27:53.026 [2024-11-19 13:19:56.302052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.026 [2024-11-19 13:19:56.302107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.026 [2024-11-19 13:19:56.302121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.026 [2024-11-19 13:19:56.302128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.026 [2024-11-19 13:19:56.302134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.026 [2024-11-19 13:19:56.302149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.026 qpair failed and we were unable to recover it. 00:27:53.026 [2024-11-19 13:19:56.312149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.026 [2024-11-19 13:19:56.312205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.026 [2024-11-19 13:19:56.312219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.026 [2024-11-19 13:19:56.312225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.026 [2024-11-19 13:19:56.312231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.026 [2024-11-19 13:19:56.312246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.026 qpair failed and we were unable to recover it. 00:27:53.026 [2024-11-19 13:19:56.322205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.026 [2024-11-19 13:19:56.322267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.026 [2024-11-19 13:19:56.322281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.026 [2024-11-19 13:19:56.322288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.026 [2024-11-19 13:19:56.322294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.026 [2024-11-19 13:19:56.322309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.026 qpair failed and we were unable to recover it. 
00:27:53.026 [2024-11-19 13:19:56.332229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.026 [2024-11-19 13:19:56.332282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.026 [2024-11-19 13:19:56.332295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.026 [2024-11-19 13:19:56.332302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.026 [2024-11-19 13:19:56.332308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.026 [2024-11-19 13:19:56.332323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.026 qpair failed and we were unable to recover it. 00:27:53.026 [2024-11-19 13:19:56.342215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.026 [2024-11-19 13:19:56.342273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.026 [2024-11-19 13:19:56.342286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.026 [2024-11-19 13:19:56.342293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.026 [2024-11-19 13:19:56.342299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.026 [2024-11-19 13:19:56.342313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.026 qpair failed and we were unable to recover it. 00:27:53.026 [2024-11-19 13:19:56.352294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.026 [2024-11-19 13:19:56.352353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.026 [2024-11-19 13:19:56.352370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.026 [2024-11-19 13:19:56.352376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.026 [2024-11-19 13:19:56.352382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.026 [2024-11-19 13:19:56.352396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.026 qpair failed and we were unable to recover it. 
00:27:53.026 [2024-11-19 13:19:56.362293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.027 [2024-11-19 13:19:56.362394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.027 [2024-11-19 13:19:56.362408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.027 [2024-11-19 13:19:56.362414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.027 [2024-11-19 13:19:56.362420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.027 [2024-11-19 13:19:56.362435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.027 qpair failed and we were unable to recover it. 00:27:53.027 [2024-11-19 13:19:56.372343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.027 [2024-11-19 13:19:56.372430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.027 [2024-11-19 13:19:56.372443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.027 [2024-11-19 13:19:56.372450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.027 [2024-11-19 13:19:56.372455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.027 [2024-11-19 13:19:56.372470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.027 qpair failed and we were unable to recover it. 00:27:53.027 [2024-11-19 13:19:56.382344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.027 [2024-11-19 13:19:56.382415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.027 [2024-11-19 13:19:56.382429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.027 [2024-11-19 13:19:56.382435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.027 [2024-11-19 13:19:56.382441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.027 [2024-11-19 13:19:56.382456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.027 qpair failed and we were unable to recover it. 
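The transport qpair address in the "Failed to connect tqpair=..." records is the same 0x7f0198000b90 across all of the retries in this stretch; later blocks report 0x7f019c000b90 and 0x18ccba0 instead. Grouping by that pointer shows how many distinct transport qpair addresses the retry loop touched (same placeholder log file as above):

grep -o 'Failed to connect tqpair=0x[0-9a-f]*' console.log | sort | uniq -c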
00:27:53.027 [2024-11-19 13:19:56.392314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.027 [2024-11-19 13:19:56.392417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.027 [2024-11-19 13:19:56.392431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.027 [2024-11-19 13:19:56.392438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.027 [2024-11-19 13:19:56.392447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.027 [2024-11-19 13:19:56.392462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.027 qpair failed and we were unable to recover it. 00:27:53.287 [2024-11-19 13:19:56.402405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.287 [2024-11-19 13:19:56.402461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.287 [2024-11-19 13:19:56.402475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.287 [2024-11-19 13:19:56.402482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.287 [2024-11-19 13:19:56.402488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.287 [2024-11-19 13:19:56.402504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.287 qpair failed and we were unable to recover it. 00:27:53.287 [2024-11-19 13:19:56.412441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.287 [2024-11-19 13:19:56.412497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.287 [2024-11-19 13:19:56.412511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.287 [2024-11-19 13:19:56.412517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.287 [2024-11-19 13:19:56.412523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.287 [2024-11-19 13:19:56.412538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.287 qpair failed and we were unable to recover it. 
00:27:53.287 [2024-11-19 13:19:56.422465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.287 [2024-11-19 13:19:56.422523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.287 [2024-11-19 13:19:56.422536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.287 [2024-11-19 13:19:56.422543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.287 [2024-11-19 13:19:56.422549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.287 [2024-11-19 13:19:56.422563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.287 qpair failed and we were unable to recover it. 00:27:53.287 [2024-11-19 13:19:56.432494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.287 [2024-11-19 13:19:56.432550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.287 [2024-11-19 13:19:56.432564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.287 [2024-11-19 13:19:56.432572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.287 [2024-11-19 13:19:56.432578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.287 [2024-11-19 13:19:56.432593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.287 qpair failed and we were unable to recover it. 00:27:53.287 [2024-11-19 13:19:56.442530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.287 [2024-11-19 13:19:56.442584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.287 [2024-11-19 13:19:56.442597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.287 [2024-11-19 13:19:56.442604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.287 [2024-11-19 13:19:56.442610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.287 [2024-11-19 13:19:56.442625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.287 qpair failed and we were unable to recover it. 
00:27:53.287 [2024-11-19 13:19:56.452528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.287 [2024-11-19 13:19:56.452610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.287 [2024-11-19 13:19:56.452624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.287 [2024-11-19 13:19:56.452630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.287 [2024-11-19 13:19:56.452636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.287 [2024-11-19 13:19:56.452651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.287 qpair failed and we were unable to recover it. 00:27:53.287 [2024-11-19 13:19:56.462507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.287 [2024-11-19 13:19:56.462566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.287 [2024-11-19 13:19:56.462579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.287 [2024-11-19 13:19:56.462586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.287 [2024-11-19 13:19:56.462592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.287 [2024-11-19 13:19:56.462607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.287 qpair failed and we were unable to recover it. 00:27:53.287 [2024-11-19 13:19:56.472666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.287 [2024-11-19 13:19:56.472722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.287 [2024-11-19 13:19:56.472736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.287 [2024-11-19 13:19:56.472743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.287 [2024-11-19 13:19:56.472749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.287 [2024-11-19 13:19:56.472763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.287 qpair failed and we were unable to recover it. 
00:27:53.287 [2024-11-19 13:19:56.482601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.287 [2024-11-19 13:19:56.482660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.287 [2024-11-19 13:19:56.482674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.287 [2024-11-19 13:19:56.482681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.287 [2024-11-19 13:19:56.482687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.287 [2024-11-19 13:19:56.482702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.287 qpair failed and we were unable to recover it. 00:27:53.287 [2024-11-19 13:19:56.492667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.287 [2024-11-19 13:19:56.492727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.287 [2024-11-19 13:19:56.492740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.287 [2024-11-19 13:19:56.492747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.287 [2024-11-19 13:19:56.492753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.287 [2024-11-19 13:19:56.492768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.287 qpair failed and we were unable to recover it. 00:27:53.287 [2024-11-19 13:19:56.502687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:53.287 [2024-11-19 13:19:56.502736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:53.288 [2024-11-19 13:19:56.502749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:53.288 [2024-11-19 13:19:56.502756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:53.288 [2024-11-19 13:19:56.502762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90 00:27:53.288 [2024-11-19 13:19:56.502776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.288 qpair failed and we were unable to recover it. 
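The CQ transport error is -6 (ENXIO, "No such device or address"), and it is reported per qpair id: the blocks up to this point are all on qpair id 4, while the blocks just before the reset below hit ids 2 and 3. A per-id breakdown from the same placeholder log file:

grep -o 'on qpair id [0-9]*' console.log | sort | uniq -c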
00:27:53.288 [2024-11-19 13:19:56.512717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:53.288 [2024-11-19 13:19:56.512771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:53.288 [2024-11-19 13:19:56.512785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:53.288 [2024-11-19 13:19:56.512792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:53.288 [2024-11-19 13:19:56.512798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0198000b90
00:27:53.288 [2024-11-19 13:19:56.512813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:53.288 qpair failed and we were unable to recover it.
00:27:53.288 [2024-11-19 13:19:56.522768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:53.288 [2024-11-19 13:19:56.522882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:53.288 [2024-11-19 13:19:56.522940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:53.288 [2024-11-19 13:19:56.522988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:53.288 [2024-11-19 13:19:56.523011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f019c000b90
00:27:53.288 [2024-11-19 13:19:56.523062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:53.288 qpair failed and we were unable to recover it.
00:27:53.288 [2024-11-19 13:19:56.532843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:53.288 [2024-11-19 13:19:56.532921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:53.288 [2024-11-19 13:19:56.532958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:53.288 [2024-11-19 13:19:56.532973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:53.288 [2024-11-19 13:19:56.532986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f019c000b90
00:27:53.288 [2024-11-19 13:19:56.533016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:53.288 qpair failed and we were unable to recover it.
00:27:53.288 [2024-11-19 13:19:56.533130] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:27:53.288 A controller has encountered a failure and is being reset.
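This is the point where the connect storm escalates: with the admin path broken as well, the initiator can no longer submit its keep-alive (nvme_ctrlr_keep_alive fails), so the controller is marked failed and a reset begins. A sketch for locating these escalation markers in a saved log (placeholder file name as above):

grep -n -e 'Submitting Keep Alive failed' \
        -e 'encountered a failure and is being reset' \
        -e 'Controller properly reset' console.log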
00:27:53.288 [2024-11-19 13:19:56.542980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:53.288 [2024-11-19 13:19:56.543080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:53.288 [2024-11-19 13:19:56.543136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:53.288 [2024-11-19 13:19:56.543162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:53.288 [2024-11-19 13:19:56.543183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ccba0
00:27:53.288 [2024-11-19 13:19:56.543235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:53.288 qpair failed and we were unable to recover it.
00:27:53.288 [2024-11-19 13:19:56.552829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:53.288 [2024-11-19 13:19:56.552904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:53.288 [2024-11-19 13:19:56.552932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:53.288 [2024-11-19 13:19:56.552953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:53.288 [2024-11-19 13:19:56.552967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18ccba0
00:27:53.288 [2024-11-19 13:19:56.552998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:53.288 qpair failed and we were unable to recover it.
00:27:53.288 Controller properly reset.
00:27:53.288 Initializing NVMe Controllers
00:27:53.288 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:53.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:53.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:27:53.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:27:53.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:27:53.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:27:53.288 Initialization complete. Launching workers.
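After the reset the test re-attaches to the subsystem and relaunches a worker per core, so the forced disconnect was survivable. The attach above is performed by SPDK's userspace initiator; a rough hand-driven equivalent using the kernel initiator and nvme-cli would be the following (assuming nvme-cli is installed and the nvme-tcp kernel module is loaded — the test itself does not use this path):

sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# and to undo it:
sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1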
00:27:53.288 Starting thread on core 1 00:27:53.288 Starting thread on core 2 00:27:53.288 Starting thread on core 3 00:27:53.288 Starting thread on core 0 00:27:53.288 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:53.288 00:27:53.288 real 0m10.755s 00:27:53.288 user 0m19.387s 00:27:53.288 sys 0m4.707s 00:27:53.288 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:53.288 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.288 ************************************ 00:27:53.288 END TEST nvmf_target_disconnect_tc2 00:27:53.288 ************************************ 00:27:53.288 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:53.288 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:53.288 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:53.288 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:53.288 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:53.288 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:53.288 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:53.288 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:53.288 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:53.288 rmmod nvme_tcp 00:27:53.288 rmmod nvme_fabrics 00:27:53.548 rmmod nvme_keyring 00:27:53.548 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:53.548 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:53.548 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:53.548 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3004936 ']' 00:27:53.548 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3004936 00:27:53.548 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3004936 ']' 00:27:53.548 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3004936 00:27:53.548 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:53.548 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:53.548 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3004936 00:27:53.548 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:53.548 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:53.548 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3004936' 00:27:53.548 killing process with pid 3004936 00:27:53.548 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 3004936 00:27:53.548 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3004936 00:27:53.807 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:53.807 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:53.807 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:53.807 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:53.807 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:53.807 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:53.807 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:53.807 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:53.807 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:53.807 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.807 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.807 13:19:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.715 13:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:55.715 00:27:55.715 real 0m19.546s 00:27:55.715 user 0m46.874s 00:27:55.715 sys 0m9.672s 00:27:55.715 13:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:55.715 13:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:55.715 ************************************ 00:27:55.715 END TEST nvmf_target_disconnect 00:27:55.715 ************************************ 00:27:55.715 13:19:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:55.715 00:27:55.715 real 5m51.659s 00:27:55.715 user 10m32.072s 00:27:55.715 sys 1m58.472s 00:27:55.715 13:19:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:55.715 13:19:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.715 ************************************ 00:27:55.715 END TEST nvmf_host 00:27:55.715 ************************************ 00:27:55.976 13:19:59 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:55.976 13:19:59 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:55.976 13:19:59 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:55.976 13:19:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:55.976 13:19:59 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.976 13:19:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:55.976 ************************************ 00:27:55.976 START TEST nvmf_target_core_interrupt_mode 00:27:55.976 ************************************ 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:55.976 * Looking for test storage... 00:27:55.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:55.976 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:55.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.977 --rc genhtml_branch_coverage=1 00:27:55.977 --rc genhtml_function_coverage=1 00:27:55.977 --rc genhtml_legend=1 00:27:55.977 --rc geninfo_all_blocks=1 00:27:55.977 --rc geninfo_unexecuted_blocks=1 00:27:55.977 00:27:55.977 ' 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:55.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.977 --rc genhtml_branch_coverage=1 00:27:55.977 --rc genhtml_function_coverage=1 00:27:55.977 --rc genhtml_legend=1 00:27:55.977 --rc geninfo_all_blocks=1 00:27:55.977 --rc geninfo_unexecuted_blocks=1 00:27:55.977 00:27:55.977 ' 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:55.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.977 --rc genhtml_branch_coverage=1 00:27:55.977 --rc genhtml_function_coverage=1 00:27:55.977 --rc genhtml_legend=1 00:27:55.977 --rc geninfo_all_blocks=1 00:27:55.977 --rc geninfo_unexecuted_blocks=1 00:27:55.977 00:27:55.977 ' 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:55.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.977 --rc genhtml_branch_coverage=1 00:27:55.977 --rc genhtml_function_coverage=1 00:27:55.977 --rc genhtml_legend=1 00:27:55.977 --rc geninfo_all_blocks=1 00:27:55.977 --rc geninfo_unexecuted_blocks=1 00:27:55.977 00:27:55.977 ' 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:55.977 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:55.978 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:55.979 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:55.979 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:55.979 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:55.979 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:55.979 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:55.979 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:55.979 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.979 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:56.238 ************************************ 00:27:56.238 START TEST nvmf_abort 00:27:56.238 ************************************ 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:56.238 * Looking for test storage... 00:27:56.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:56.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.238 --rc genhtml_branch_coverage=1 00:27:56.238 --rc genhtml_function_coverage=1 00:27:56.238 --rc genhtml_legend=1 00:27:56.238 --rc geninfo_all_blocks=1 00:27:56.238 --rc geninfo_unexecuted_blocks=1 00:27:56.238 00:27:56.238 ' 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:56.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.238 --rc genhtml_branch_coverage=1 00:27:56.238 --rc genhtml_function_coverage=1 00:27:56.238 --rc genhtml_legend=1 00:27:56.238 --rc geninfo_all_blocks=1 00:27:56.238 --rc geninfo_unexecuted_blocks=1 00:27:56.238 00:27:56.238 ' 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:56.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.238 --rc genhtml_branch_coverage=1 00:27:56.238 --rc genhtml_function_coverage=1 00:27:56.238 --rc genhtml_legend=1 00:27:56.238 --rc geninfo_all_blocks=1 00:27:56.238 --rc geninfo_unexecuted_blocks=1 00:27:56.238 00:27:56.238 ' 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:56.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.238 --rc genhtml_branch_coverage=1 00:27:56.238 --rc genhtml_function_coverage=1 00:27:56.238 --rc genhtml_legend=1 00:27:56.238 --rc geninfo_all_blocks=1 00:27:56.238 --rc geninfo_unexecuted_blocks=1 00:27:56.238 00:27:56.238 ' 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.238 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.239 13:19:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:56.239 13:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:02.813 13:20:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.813 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:02.814 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:02.814 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:02.814 Found net devices under 0000:86:00.0: cvl_0_0 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:02.814 Found net devices under 0000:86:00.1: cvl_0_1 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:02.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:02.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:28:02.814 00:28:02.814 --- 10.0.0.2 ping statistics --- 00:28:02.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.814 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:02.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:02.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:28:02.814 00:28:02.814 --- 10.0.0.1 ping statistics --- 00:28:02.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.814 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
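The nvmftestinit bring-up traced above reduces to a short iproute2/iptables sequence. A condensed sketch, assuming the interface names (cvl_0_0, cvl_0_1), addresses, and namespace name that this particular run detected:

    # Put the target-side port in its own network namespace so initiator
    # and target traffic cross the physical NIC pair, not kernel loopback.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator side stays in the default namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up

    # Target side lives inside the namespace; bring up its loopback too.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port, tagged so teardown can strip the rule again.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: test rule'

    # Gate: both directions must answer before the tests proceed.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The SPDK_NVMF comment on the iptables rule is what lets the iptr helper later in the log undo it wholesale with iptables-save | grep -v SPDK_NVMF | iptables-restore.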
nvmfpid=3009604 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3009604 00:28:02.814 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3009604 ']' 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.815 [2024-11-19 13:20:05.515310] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:02.815 [2024-11-19 13:20:05.516306] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:28:02.815 [2024-11-19 13:20:05.516344] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.815 [2024-11-19 13:20:05.597832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:02.815 [2024-11-19 13:20:05.638773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:02.815 [2024-11-19 13:20:05.638810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:02.815 [2024-11-19 13:20:05.638817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:02.815 [2024-11-19 13:20:05.638823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:02.815 [2024-11-19 13:20:05.638828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:02.815 [2024-11-19 13:20:05.640236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:02.815 [2024-11-19 13:20:05.640340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.815 [2024-11-19 13:20:05.640341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:02.815 [2024-11-19 13:20:05.708391] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:02.815 [2024-11-19 13:20:05.709207] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:02.815 [2024-11-19 13:20:05.709533] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:02.815 [2024-11-19 13:20:05.709631] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.815 [2024-11-19 13:20:05.785228] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.815 Malloc0 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.815 Delay0 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.815 [2024-11-19 13:20:05.881186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.815 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:02.815 [2024-11-19 13:20:05.970582] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:05.353 Initializing NVMe Controllers 00:28:05.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:05.353 controller IO queue size 128 less than required 00:28:05.353 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:05.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:05.353 Initialization complete. Launching workers. 
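Strung together, the target launch and rpc_cmd calls traced since the ping check amount to the sketch below (paths shortened relative to an SPDK checkout; reading the flags, the four 1000000 arguments to bdev_delay_create are microsecond latency injections, presumably what keeps enough I/O in flight for aborts to catch):

    # Start the target inside the namespace: cores 1-3 (-m 0xE), all
    # tracepoint groups (-e 0xFFFF), reactors in interrupt mode.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0      # 64 MiB RAM disk, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000 # slow every read/write down
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Hammer it with aborts: 1 core, 1 second, queue depth 128.
    ./build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The "controller IO queue size 128 less than required" line in the output is the abort example noting that the granted queue is smaller than what -q 128 plus abort traffic would need, so excess requests queue in the driver, exactly the condition the warning describes.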
00:28:05.353 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37039 00:28:05.353 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37100, failed to submit 66 00:28:05.353 success 37039, unsuccessful 61, failed 0 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:05.353 rmmod nvme_tcp 00:28:05.353 rmmod nvme_fabrics 00:28:05.353 rmmod nvme_keyring 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3009604 ']' 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3009604 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3009604 ']' 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3009604 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3009604 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3009604' 00:28:05.353 killing process with pid 3009604 
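One way to read the summary above: 37100 abort commands were submitted, of which 37039 succeeded and 61 did not (37039 + 61 = 37100); the 37039 I/Os counted as "failed" are the ones whose aborts landed, while 127 I/Os completed before an abort could catch them and 66 aborts could not be submitted at all. On that reading, the test passes on consistent abort handling under load, not on the I/O itself completing.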
00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3009604 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3009604 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:05.353 13:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.378 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:07.378 00:28:07.378 real 0m11.141s 00:28:07.378 user 0m10.459s 00:28:07.378 sys 0m5.707s 00:28:07.378 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:07.378 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:07.378 ************************************ 00:28:07.378 END TEST nvmf_abort 00:28:07.378 ************************************ 00:28:07.378 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:07.378 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:07.378 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:07.378 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:07.378 ************************************ 00:28:07.378 START TEST nvmf_ns_hotplug_stress 00:28:07.378 ************************************ 00:28:07.378 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:07.378 * Looking for test storage... 
00:28:07.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:07.378 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:07.378 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:28:07.378 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:07.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.637 --rc genhtml_branch_coverage=1 00:28:07.637 --rc genhtml_function_coverage=1 00:28:07.637 --rc genhtml_legend=1 00:28:07.637 --rc geninfo_all_blocks=1 00:28:07.637 --rc geninfo_unexecuted_blocks=1 00:28:07.637 00:28:07.637 ' 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:07.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.637 --rc genhtml_branch_coverage=1 00:28:07.637 --rc genhtml_function_coverage=1 00:28:07.637 --rc genhtml_legend=1 00:28:07.637 --rc geninfo_all_blocks=1 00:28:07.637 --rc geninfo_unexecuted_blocks=1 00:28:07.637 00:28:07.637 ' 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:07.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.637 --rc genhtml_branch_coverage=1 00:28:07.637 --rc genhtml_function_coverage=1 00:28:07.637 --rc genhtml_legend=1 00:28:07.637 --rc geninfo_all_blocks=1 00:28:07.637 --rc geninfo_unexecuted_blocks=1 00:28:07.637 00:28:07.637 ' 00:28:07.637 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:07.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.637 --rc genhtml_branch_coverage=1 00:28:07.638 --rc genhtml_function_coverage=1 
00:28:07.638 --rc genhtml_legend=1 00:28:07.638 --rc geninfo_all_blocks=1 00:28:07.638 --rc geninfo_unexecuted_blocks=1 00:28:07.638 00:28:07.638 ' 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:07.638 13:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:14.208 13:20:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:14.208 13:20:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:14.208 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:14.208 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.208 
13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:14.208 Found net devices under 0000:86:00.0: cvl_0_0 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.208 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:14.209 Found net devices under 0000:86:00.1: cvl_0_1 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.209 13:20:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:14.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:14.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms
00:28:14.209
00:28:14.209 --- 10.0.0.2 ping statistics ---
00:28:14.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:14.209 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:14.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:14.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms
00:28:14.209
00:28:14.209 --- 10.0.0.1 ping statistics ---
00:28:14.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:14.209 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3013611
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3013611
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3013611 ']'
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:14.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
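The nvmf_tcp_init sequence above moves one port of the E810 NIC into a private network namespace, so target traffic (cvl_0_0, 10.0.0.2) and initiator traffic (cvl_0_1, 10.0.0.1) cross a real link. A condensed sketch of the same wiring, using the interface and namespace names from this run; it must run as root:

    # Recreate the target/initiator split: one port per role, target side namespaced.
    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk   # names as used in this run
    ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                  # target port now lives in $NS
    ip addr add 10.0.0.1/24 dev "$INI_IF"              # initiator side, default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1    # verify both directions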
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:14.209 [2024-11-19 13:20:16.717227] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:14.209 [2024-11-19 13:20:16.718264] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:28:14.209 [2024-11-19 13:20:16.718308] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.209 [2024-11-19 13:20:16.802647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:14.209 [2024-11-19 13:20:16.844695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.209 [2024-11-19 13:20:16.844732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.209 [2024-11-19 13:20:16.844740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.209 [2024-11-19 13:20:16.844746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.209 [2024-11-19 13:20:16.844751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:14.209 [2024-11-19 13:20:16.846207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:14.209 [2024-11-19 13:20:16.846313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.209 [2024-11-19 13:20:16.846313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:14.209 [2024-11-19 13:20:16.914322] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:14.209 [2024-11-19 13:20:16.915103] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:14.209 [2024-11-19 13:20:16.915290] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:14.209 [2024-11-19 13:20:16.915452] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
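nvmfappstart then launches nvmf_tgt inside the namespace with reactors on cores 1-3 (-m 0xE) in interrupt mode, and waitforlisten polls until the process answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern; probing readiness with the rpc_get_methods RPC is an assumption here, any cheap RPC call would serve:

    # Start the target inside the namespace; with --interrupt-mode the
    # reactors sleep between events instead of busy-polling.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # Block until the RPC socket accepts requests (probe RPC is an assumption).
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.5
    done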
00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:28:14.209 13:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:14.209 [2024-11-19 13:20:17.151077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.209 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:14.209 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:14.210 [2024-11-19 13:20:17.543456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.210 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:14.468 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:14.726 Malloc0 00:28:14.727 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:14.986 Delay0 00:28:14.986 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:14.986 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:15.244 NULL1 00:28:15.244 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
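The RPC chain just traced builds the whole test bed: a TCP transport with the options recorded above, subsystem cnode1 capped at 10 namespaces (-m 10), data and discovery listeners on 10.0.0.2:4420, then a delay-wrapped malloc bdev and a 1000-block null bdev attached as namespaces 1 and 2. Condensed, the same calls are:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0    # 32 MiB backing store, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000  # latency knobs in us
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes NSID 1
    $rpc bdev_null_create NULL1 1000 512         # 1000 blocks of 512 B
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # becomes NSID 2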
00:28:15.504 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3013879 00:28:15.504 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:15.504 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:15.504 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.881 Read completed with error (sct=0, sc=11) 00:28:16.881 13:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:16.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.881 13:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:16.881 13:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:17.141 true 00:28:17.141 13:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:17.141 13:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.079 13:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:18.079 13:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:18.079 13:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:18.339 true 00:28:18.339 13:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:18.339 13:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.598 13:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:18.598 13:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:18.598 13:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:18.857 true 00:28:18.857 13:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:18.857 13:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.054 13:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.054 13:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:20.054 13:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:20.329 true 00:28:20.329 13:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:20.329 13:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.590 13:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.849 13:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:20.849 13:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:20.849 true 00:28:20.849 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:20.849 13:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.227 13:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.227 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:28:22.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.227 13:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:22.227 13:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:22.486 true 00:28:22.486 13:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:22.486 13:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.424 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:23.424 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:23.424 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:23.683 true 00:28:23.683 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:23.683 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.943 13:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.202 13:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:24.202 13:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:24.202 true 00:28:24.202 13:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:24.202 13:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:25.582 13:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:25.582 13:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:25.582 13:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:25.582 true 00:28:25.582 13:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:25.582 13:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:25.841 13:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:26.099 13:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:26.099 13:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:26.357 true 00:28:26.357 13:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:26.357 13:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:27.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.294 13:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:27.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.554 13:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:27.554 13:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:27.813 true 00:28:27.813 13:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:27.813 13:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.750 13:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:28.750 13:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:28.750 13:20:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:29.008 true 00:28:29.008 13:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:29.008 13:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:29.267 13:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:29.526 13:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:29.526 13:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:29.526 true 00:28:29.792 13:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:29.792 13:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:30.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.736 13:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:30.736 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:30.736 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:30.996 true 00:28:30.996 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:30.996 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.255 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:31.515 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:31.515 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:31.515 true 00:28:31.774 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:31.774 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:32.727 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:32.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:32.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:32.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:32.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:32.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:32.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:32.987 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:32.987 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:33.246 true 00:28:33.246 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:33.246 13:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.184 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:34.184 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.184 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:34.184 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:34.442 true 00:28:34.443 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:34.443 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.702 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.961 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:34.961 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:34.961 true 00:28:34.961 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:34.961 13:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:36.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.339 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:36.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.339 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:36.339 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:36.598 true 00:28:36.598 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:36.598 13:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:37.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.535 13:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:37.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.535 13:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:37.535 13:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:37.795 true 00:28:37.795 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:37.795 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:38.054 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:38.314 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:38.314 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:38.314 true 00:28:38.314 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:38.314 13:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:39.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:39.690 13:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:39.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:39.690 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:39.690 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:39.949 true 00:28:39.949 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:39.949 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:40.207 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:40.466 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:40.466 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:40.466 true 00:28:40.466 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:40.466 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:41.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:41.844 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:41.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:41.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:41.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:41.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:41.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:41.844 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1024 00:28:41.844 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:42.103 true 00:28:42.103 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:42.103 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.040 13:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:43.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.040 13:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:43.040 13:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:43.299 true 00:28:43.299 13:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:43.299 13:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.558 13:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:43.816 13:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:43.816 13:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:43.816 true 00:28:43.816 13:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879 00:28:43.816 13:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:45.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:45.193 13:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:45.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:45.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:45.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:45.193 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:45.193 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11)
00:28:45.193 13:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:28:45.193 13:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:28:45.452 true
00:28:45.452 13:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879
00:28:45.452 13:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:46.386 13:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:46.386 Initializing NVMe Controllers
00:28:46.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:46.386 Controller IO queue size 128, less than required.
00:28:46.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:46.386 Controller IO queue size 128, less than required.
00:28:46.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:46.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:46.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:46.386 Initialization complete. Launching workers.
00:28:46.386 ========================================================
00:28:46.386 Latency(us)
00:28:46.386 Device Information : IOPS MiB/s Average min max
00:28:46.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1900.23 0.93 46176.79 2849.29 1013384.60
00:28:46.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17336.37 8.47 7382.94 1328.42 407048.49
00:28:46.386 ========================================================
00:28:46.386 Total : 19236.60 9.39 11215.08 1328.42 1013384.60
00:28:46.386
00:28:46.386 13:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:28:46.386 13:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:28:46.646 true
00:28:46.646 13:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3013879
00:28:46.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3013879) - No such process
00:28:46.646 13:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3013879
00:28:46.646 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:46.906 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:46.906 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:28:46.906 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:28:46.906 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:28:46.906 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:46.906 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:28:47.164 null0
00:28:47.164 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:47.164 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:47.164 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:28:47.422 null1
00:28:47.422 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:47.422 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:47.422 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:47.681 null2 00:28:47.681 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:47.681 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:47.681 13:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:47.681 null3 00:28:47.681 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:47.681 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:47.681 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:47.939 null4 00:28:47.939 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:47.939 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:47.939 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:48.197 null5 00:28:48.197 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:48.197 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:48.197 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:48.197 null6 00:28:48.197 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:48.197 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:48.197 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:48.456 null7 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
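Everything between the PERF_PID=3013879 launch and the "No such process" message above is one stress loop: spdk_nvme_perf reads both namespaces for 30 seconds while the shell keeps unplugging namespace 1, re-adding Delay0, and growing NULL1 by one block per pass; the "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the expected I/O failures while the namespace is detached. The loop's shape, reconstructed from the sh@44-50 traces:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    perf_pid=$!
    null_size=1000
    while kill -0 "$perf_pid" 2>/dev/null; do   # until the 30 s perf run exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-unplug NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # plug it back in
        $rpc bdev_null_resize NULL1 $((++null_size))                  # grow NSID 2 online
    done
    wait "$perf_pid"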
00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
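The interleaved "local nsid=... bdev=..." and "(( i < 10 ))" traces above come from the add_remove helper (ns_hotplug_stress.sh lines 14-17) that each background worker runs. A sketch of its likely shape; only the add call is traced in this excerpt, and the paired remove is an assumption inferred from the helper's name:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {    # ten add/remove rounds per worker (the remove half is assumed)
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }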
00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:48.456 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
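From this point the trace interleaves eight background workers. Lines @62-@66 spawn one add_remove job per namespace and collect its PID; lines @14-@18 are the body of each worker, which attaches and detaches its namespace on nqn.2016-06.io.spdk:cnode1 ten times. A sketch reconstructed from the xtrace above (the rpc helper is the same assumption as before):

    add_remove() {                          # @14-@18
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    pids=()
    for ((i = 0; i < nthreads; i++)); do    # @62-@64
        add_remove "$((i + 1))" "null$i" &  # nsid 1..8 paired with null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                       # @66: the eight worker PIDs

Because the workers run concurrently, their (( ++i )), add_ns, and remove_ns trace lines interleave arbitrarily in the log that follows; the remove order within a pass (e.g. 1, 2, 5, 4, 6, 3, 7, 8) is scheduling noise, not a test failure.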
00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3019224 3019225 3019227 3019229 3019231 3019233 3019235 3019237 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.457 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:48.715 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:48.715 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:48.715 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:48.715 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:48.715 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:48.715 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:48.715 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:48.715 13:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:48.974 13:20:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.974 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.974 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:48.974 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.974 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.974 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:48.974 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.974 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.974 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:48.974 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.975 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.975 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:48.975 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.975 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.975 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:48.975 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.975 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.975 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:48.975 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.975 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.975 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:48.975 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:48.975 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:48.975 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.234 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:49.493 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.493 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.493 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:49.493 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.493 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.493 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:49.493 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.493 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.493 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:49.493 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:49.493 13:20:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:49.493 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:49.493 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:49.493 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:49.493 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:49.493 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:49.493 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:49.752 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.752 13:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.752 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.753 13:20:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:49.753 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:50.011 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.011 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:50.012 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:50.012 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:50.012 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:50.012 
13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:50.012 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:50.012 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.270 13:20:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.270 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.271 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:50.271 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:50.271 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:50.271 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:50.271 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:50.271 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:50.529 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.530 13:20:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:50.530 13:20:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:50.530 13:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:50.788 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:50.788 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:50.788 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:50.788 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:50.788 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:50.788 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.789 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:50.789 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:51.047 13:20:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.047 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:51.307 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:51.307 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:51.307 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:51.307 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:51.307 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:51.307 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.307 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:51.307 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:51.307 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.307 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.307 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.307 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:51.307 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.307 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:51.307 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.307 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:51.566 13:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.824 13:20:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:51.824 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:51.825 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:52.083 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:52.083 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:52.083 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:52.083 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:52.083 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:52.083 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:52.083 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:52.083 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:52.342 13:20:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:52.342 13:20:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:52.342 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
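Editor's note: the interleaved @16-@18 trace above is consistent with the stress pattern ns_hotplug_stress.sh drives: several concurrent workers repeatedly attaching and detaching null bdevs as namespaces of the same subsystem. A minimal bash sketch of that loop, reconstructed from the traced rpc.py calls; the worker/helper structure and the name add_remove are assumptions, not the script verbatim:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {                       # hypothetical worker name, not from the trace
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; ++i)); do   # same loop bounds as the @16 lines above
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
    done
}

for n in {1..8}; do                  # nsid 1..8 backed by null0..null7, as traced
    add_remove "$n" "null$((n - 1))" &
done
wait

The shuffled namespace ordering in the trace (adds for nsid 3, 1, 6, 7, ... arriving out of order) is what concurrent workers racing on the same subsystem look like; that interleaving, not any single add or remove, is the hotplug stress being tested.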
00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:52.600 13:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:52.858 rmmod nvme_tcp 00:28:52.858 rmmod nvme_fabrics 00:28:52.858 rmmod nvme_keyring 00:28:52.858 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:52.858 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:52.858 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:52.858 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3013611 ']' 00:28:52.858 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3013611 00:28:52.858 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3013611 ']' 00:28:52.858 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3013611 00:28:52.858 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:52.858 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:52.858 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3013611 00:28:52.858 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:52.858 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:52.858 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3013611' 00:28:52.858 killing process with pid 3013611 00:28:52.858 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3013611 00:28:52.858 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3013611 00:28:53.117 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:53.117 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:53.117 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:53.117 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:53.117 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:53.117 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:53.117 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:53.117 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:53.117 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:53.117 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.117 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.117 13:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.033 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:55.033 00:28:55.033 real 0m47.750s 00:28:55.033 user 2m58.496s 00:28:55.033 sys 0m20.039s 00:28:55.033 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:55.033 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:55.033 ************************************ 00:28:55.033 END TEST nvmf_ns_hotplug_stress 00:28:55.033 ************************************ 00:28:55.033 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:55.033 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:55.033 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:55.033 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x 00:28:55.294 ************************************ 00:28:55.294 START TEST nvmf_delete_subsystem 00:28:55.294 ************************************ 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:55.294 * Looking for test storage... 00:28:55.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:55.294 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:55.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.295 --rc genhtml_branch_coverage=1 00:28:55.295 --rc genhtml_function_coverage=1 00:28:55.295 --rc genhtml_legend=1 00:28:55.295 --rc geninfo_all_blocks=1 00:28:55.295 --rc geninfo_unexecuted_blocks=1 00:28:55.295 00:28:55.295 ' 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:55.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.295 --rc genhtml_branch_coverage=1 00:28:55.295 --rc genhtml_function_coverage=1 00:28:55.295 --rc genhtml_legend=1 00:28:55.295 --rc geninfo_all_blocks=1 00:28:55.295 --rc geninfo_unexecuted_blocks=1 00:28:55.295 00:28:55.295 ' 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:55.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.295 --rc genhtml_branch_coverage=1 00:28:55.295 --rc genhtml_function_coverage=1 00:28:55.295 --rc genhtml_legend=1 00:28:55.295 --rc geninfo_all_blocks=1 00:28:55.295 --rc geninfo_unexecuted_blocks=1 00:28:55.295 00:28:55.295 ' 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:55.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.295 --rc genhtml_branch_coverage=1 00:28:55.295 --rc genhtml_function_coverage=1 00:28:55.295 --rc 
genhtml_legend=1 00:28:55.295 --rc geninfo_all_blocks=1 00:28:55.295 --rc geninfo_unexecuted_blocks=1 00:28:55.295 00:28:55.295 ' 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.295 13:20:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:55.295 13:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:02.075 13:21:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:02.075 13:21:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:02.075 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.075 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:02.076 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.076 13:21:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:02.076 Found net devices under 0000:86:00.0: cvl_0_0 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:02.076 Found net devices under 0000:86:00.1: cvl_0_1 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:02.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:02.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:29:02.076 00:29:02.076 --- 10.0.0.2 ping statistics --- 00:29:02.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.076 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:02.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:02.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:29:02.076 00:29:02.076 --- 10.0.0.1 ping statistics --- 00:29:02.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.076 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3023737 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3023737 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3023737 ']' 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
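Editor's note: the nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) splits one two-port NIC across network namespaces so a single host can act as both NVMe/TCP target and initiator. The commands below are copied from the trace; only the inline comments are added:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                 # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move port 0 into it
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator port stays in the default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator sanity check

The two successful single-packet pings above are the harness confirming this plumbing before it starts the target.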
00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:02.076 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:02.076 [2024-11-19 13:21:04.566900] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:02.076 [2024-11-19 13:21:04.567893] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:29:02.076 [2024-11-19 13:21:04.567932] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.076 [2024-11-19 13:21:04.648005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:02.077 [2024-11-19 13:21:04.689141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.077 [2024-11-19 13:21:04.689177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.077 [2024-11-19 13:21:04.689184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.077 [2024-11-19 13:21:04.689190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.077 [2024-11-19 13:21:04.689195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:02.077 [2024-11-19 13:21:04.690415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.077 [2024-11-19 13:21:04.690417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.077 [2024-11-19 13:21:04.758133] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:02.077 [2024-11-19 13:21:04.758714] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:02.077 [2024-11-19 13:21:04.758933] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
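Editor's note: the startup notices above come from launching the target inside that namespace with interrupt-driven reactors instead of the default busy-polling. A condensed sketch of the traced launch (nvmf/common.sh@508-510); backgrounding with & is implied by the harness rather than shown verbatim in the trace:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &   # -m 0x3: reactors on cores 0 and 1
nvmfpid=$!                                     # traced as nvmf/common.sh@509
waitforlisten "$nvmfpid"    # harness helper traced at @510; returns once /var/tmp/spdk.sock answers

With --interrupt-mode, the reactor.c and thread.c notices record each reactor and spdk_thread (app_thread plus one poll group per core) being switched to event-driven wakeups, which is the mode this whole nvmf_target_core_interrupt_mode test group exercises.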
00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:02.077 [2024-11-19 13:21:04.827294] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:02.077 [2024-11-19 13:21:04.855544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:02.077 NULL1 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.077 13:21:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:02.077 Delay0 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3023762 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:02.077 13:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:02.077 [2024-11-19 13:21:04.968735] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
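Editor's note: the test body that follows deletes a subsystem while spdk_nvme_perf is driving queue-depth-128 I/O against it; the bdev_delay layer adds roughly a second of latency per operation (the -r/-t/-w/-n arguments are microseconds) so plenty of commands are still in flight when the delete lands. The RPC sequence below is copied from the trace (delete_subsystem.sh@15-32, where rpc_cmd is the harness's wrapper around scripts/rpc.py); only the backgrounding shorthand is condensed:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s added latency per op
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!                                       # traced as @28
sleep 2                                           # let I/O build up (@30)
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # the racing delete (@32)

The storm of "completed with error (sct=0, sc=8)" lines in the next trace block is the expected result, not a failure: sct=0/sc=0x08 decodes to the NVMe generic status "Command Aborted due to SQ Deletion", reported for commands still queued when the subsystem's qpairs are torn down, and "starting I/O failed: -6" is perf being unable to submit new commands on the now-dead connection.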
00:29:03.981 13:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:03.981 13:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:03.981 13:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[log condensed: roughly 300 interleaved 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' aborts omitted here; they continue between the ERROR records listed below as in-flight perf I/O is aborted by the subsystem deletion]
00:29:03.981 [2024-11-19 13:21:07.047571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd95000d680 is same with the state(6) to be set
00:29:03.982 [2024-11-19 13:21:07.048039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd950000c40 is same with the state(6) to be set
00:29:04.920 [2024-11-19 13:21:08.022685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d089a0 is same with the state(6) to be set
00:29:04.920 [2024-11-19 13:21:08.050310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd95000d350 is same with the state(6) to be set
00:29:04.920 [2024-11-19 13:21:08.050755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d07860 is same with the state(6) to be set
00:29:04.920 [2024-11-19 13:21:08.050911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d072c0 is same with the state(6) to be set
00:29:04.920 [2024-11-19 13:21:08.051462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d074a0 is same with the state(6) to be set
00:29:04.920 Initializing NVMe Controllers
00:29:04.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:04.920 Controller IO queue size 128, less than required.
00:29:04.920 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:04.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:29:04.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:29:04.920 Initialization complete. Launching workers.
00:29:04.920 ========================================================
00:29:04.920                                                                  Latency(us)
00:29:04.920 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:29:04.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  186.81    0.09  951317.35     439.23 1011069.20
00:29:04.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  154.02    0.08  882782.55     256.79 1043285.67
00:29:04.920 ========================================================
00:29:04.920 Total                                                                    :  340.82    0.17  920346.81     256.79 1043285.67
00:29:04.920 ========================================================
00:29:04.920
00:29:04.920 [2024-11-19 13:21:08.052048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d089a0 (9): Bad file descriptor
00:29:04.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:04.920 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:04.920 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:29:04.920 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3023762
00:29:04.920 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3023762
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3023762) - No such process
00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3023762
00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
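The 'NOT wait 3023762' step that follows is the harness asserting that perf died with an error: the deletion tore down the qpairs mid-run, so completions came back with generic status sc=8 (command aborted due to SQ deletion). A standalone sketch of the bounded-poll-then-assert idiom from delete_subsystem.sh lines 34-45 (variable names are hypothetical; the harness's NOT expect-failure helper is approximated with an if):
  # Wait, with an upper bound, for the backgrounded perf to exit, then
  # require that it exited nonzero. $perf_pid is the spdk_nvme_perf job.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 30 )) && exit 1     # ~15s upper bound at 0.5s per nap
      sleep 0.5
  done
  if wait "$perf_pid"; then
      echo "perf survived subsystem deletion" >&2
      exit 1                           # the test expects an I/O failure here
  fi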
common/autotest_common.sh@654 -- # valid_exec_arg wait 3023762 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3023762 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:05.489 [2024-11-19 13:21:08.583389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3024839 00:29:05.489 13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:05.489 
13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3024839
13:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[2024-11-19 13:21:08.667897] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:29:05.749 13:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:05.749 13:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3024839
00:29:05.749 13:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[log condensed: the same three-line poll repeats five more times (00:29:06.314 through 00:29:08.276, 13:21:09-13:21:11) while the 3-second perf run completes]
00:29:08.535 Initializing NVMe Controllers
00:29:08.535 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:08.535 Controller IO queue size 128, less than required.
00:29:08.535 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:08.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:29:08.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:29:08.535 Initialization complete. Launching workers.
00:29:08.535 ========================================================
00:29:08.535                                                                  Latency(us)
00:29:08.535 Device Information                                                       :    IOPS   MiB/s     Average         min         max
00:29:08.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06  1002326.42  1000145.39  1006838.23
00:29:08.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06  1004792.64  1000386.62  1041088.93
00:29:08.535 ========================================================
00:29:08.535 Total                                                                    :  256.00    0.12  1003559.53  1000145.39  1041088.93
00:29:08.535 ========================================================
00:29:08.535
00:29:08.794 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:08.794 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3024839
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3024839) - No such process
00:29:08.794 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3024839
00:29:08.794 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:29:08.794 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:29:08.794 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:08.794 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:29:08.794 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:08.794 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:29:08.794 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:08.794 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3023737 ']'
00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3023737
00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3023737 ']'
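The killprocess/nvmftestfini steps traced around here amount to a small teardown recipe; a condensed sketch follows (not the literal nvmf/common.sh code; the netns and interface names are the ones this rig uses, and the ip netns delete line is an assumed equivalent of remove_spdk_ns):
  sync
  modprobe -v -r nvme-tcp              # prints the rmmod lines seen above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the nvmf_tgt reactor
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null            # remove_spdk_ns equivalent (assumed)
  ip -4 addr flush cvl_0_1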
# kill -0 3023737 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3023737 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3023737' 00:29:09.055 killing process with pid 3023737 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3023737 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3023737 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.055 13:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:11.593 00:29:11.593 real 0m16.066s 00:29:11.593 user 0m25.847s 00:29:11.593 sys 0m6.233s 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:11.593 ************************************ 00:29:11.593 END TEST nvmf_delete_subsystem 00:29:11.593 ************************************ 00:29:11.593 13:21:14 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:11.593 ************************************ 00:29:11.593 START TEST nvmf_host_management 00:29:11.593 ************************************ 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:11.593 * Looking for test storage... 00:29:11.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:11.593 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:11.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.594 --rc genhtml_branch_coverage=1 00:29:11.594 --rc genhtml_function_coverage=1 00:29:11.594 --rc genhtml_legend=1 00:29:11.594 --rc geninfo_all_blocks=1 00:29:11.594 --rc geninfo_unexecuted_blocks=1 00:29:11.594 00:29:11.594 ' 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:11.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.594 --rc genhtml_branch_coverage=1 00:29:11.594 --rc genhtml_function_coverage=1 00:29:11.594 --rc genhtml_legend=1 00:29:11.594 --rc geninfo_all_blocks=1 00:29:11.594 --rc geninfo_unexecuted_blocks=1 00:29:11.594 00:29:11.594 ' 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:11.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.594 --rc genhtml_branch_coverage=1 00:29:11.594 --rc genhtml_function_coverage=1 00:29:11.594 --rc genhtml_legend=1 00:29:11.594 --rc geninfo_all_blocks=1 00:29:11.594 --rc geninfo_unexecuted_blocks=1 00:29:11.594 00:29:11.594 ' 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:11.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.594 --rc genhtml_branch_coverage=1 00:29:11.594 --rc genhtml_function_coverage=1 00:29:11.594 --rc 
genhtml_legend=1 00:29:11.594 --rc geninfo_all_blocks=1 00:29:11.594 --rc geninfo_unexecuted_blocks=1 00:29:11.594 00:29:11.594 ' 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.594 13:21:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain prefixes repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same toolchain prefixes and system paths as above]
00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same toolchain prefixes and system paths as above]
00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo [the exported PATH value, as above]
00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID"
-e 0xFFFF) 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:11.594 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:11.595 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:11.595 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.595 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:11.595 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:11.595 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:11.595 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.595 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.595 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.595 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:11.595 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:11.595 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:11.595 13:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:18.167 13:21:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.167 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:18.167 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:18.168 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
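The 'Found net devices under ...' records here come from mapping each whitelisted PCI function to its kernel interface through sysfs. A simplified sketch of that discovery loop (nullglob is added so a portless device yields an empty array; pci_devs is assumed to already hold the two E810 functions found above):
  shopt -s nullglob                                # unbound port -> empty array, not a literal glob
  for pci in "${pci_devs[@]}"; do                  # e.g. 0000:86:00.0, 0000:86:00.1
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      (( ${#pci_net_devs[@]} )) || continue        # no kernel netdev bound to this port
      pci_net_devs=("${pci_net_devs[@]##*/}")      # strip the sysfs path, keep the iface name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done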
00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:18.168 Found net devices under 0000:86:00.0: cvl_0_0 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:18.168 Found net devices under 0000:86:00.1: cvl_0_1 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:18.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:29:18.168 00:29:18.168 --- 10.0.0.2 ping statistics --- 00:29:18.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.168 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:18.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:29:18.168 00:29:18.168 --- 10.0.0.1 ping statistics --- 00:29:18.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.168 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3028832 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3028832 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3028832 ']' 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:18.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.168 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:18.169 13:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.169 [2024-11-19 13:21:20.725460] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:18.169 [2024-11-19 13:21:20.726388] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:29:18.169 [2024-11-19 13:21:20.726421] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.169 [2024-11-19 13:21:20.807301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:18.169 [2024-11-19 13:21:20.848814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.169 [2024-11-19 13:21:20.848850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.169 [2024-11-19 13:21:20.848857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.169 [2024-11-19 13:21:20.848863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.169 [2024-11-19 13:21:20.848869] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:18.169 [2024-11-19 13:21:20.850343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:18.169 [2024-11-19 13:21:20.850444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:18.169 [2024-11-19 13:21:20.850553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.169 [2024-11-19 13:21:20.850554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:18.169 [2024-11-19 13:21:20.917710] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:18.169 [2024-11-19 13:21:20.918795] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:18.169 [2024-11-19 13:21:20.918992] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:18.169 [2024-11-19 13:21:20.919272] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:18.169 [2024-11-19 13:21:20.919307] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
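The nvmf_tcp_init sequence traced above is a few lines of ip/iptables plumbing: the target-side port is moved into a private network namespace, each side gets an address on 10.0.0.0/24, TCP port 4420 is opened for NVMe/TCP, and a ping in each direction proves the path. A minimal sketch of the same setup, assuming this host's cvl_0_0/cvl_0_1 interface names and root privileges (adjust for other NICs):

    # target NIC goes into its own namespace; the initiator stays in the root namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

The target itself then runs inside that namespace (NVMF_TARGET_NS_CMD expands to 'ip netns exec cvl_0_0_ns_spdk'), which is why nvmf_tgt above is launched through it with -e 0xFFFF --interrupt-mode -m 0x1E and brings up four interrupt-mode reactors on cores 1-4 (core mask 0x1E).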
00:29:18.428 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:18.428 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:18.428 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.429 [2024-11-19 13:21:21.607310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.429 Malloc0 00:29:18.429 [2024-11-19 13:21:21.691469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3029095 00:29:18.429 13:21:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3029095 /var/tmp/bdevperf.sock 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3029095 ']' 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:18.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:18.429 { 00:29:18.429 "params": { 00:29:18.429 "name": "Nvme$subsystem", 00:29:18.429 "trtype": "$TEST_TRANSPORT", 00:29:18.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:18.429 "adrfam": "ipv4", 00:29:18.429 "trsvcid": "$NVMF_PORT", 00:29:18.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:18.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:18.429 "hdgst": ${hdgst:-false}, 00:29:18.429 "ddgst": ${ddgst:-false} 00:29:18.429 }, 00:29:18.429 "method": "bdev_nvme_attach_controller" 00:29:18.429 } 00:29:18.429 EOF 00:29:18.429 )") 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
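gen_nvmf_target_json, expanded in the trace above, is only a heredoc template: for each subsystem index it emits one {"params": ..., "method": "bdev_nvme_attach_controller"} stanza with $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT filled in, and jq then normalizes the assembled document (printed in full just below). The /dev/fd/63 on the bdevperf command line is bash process substitution, so the whole invocation is roughly equivalent to this condensed sketch (not the literal script text; the full bdevperf path is abbreviated):

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -q 64 -o 65536 -w verify -t 10 \
        --json <(gen_nvmf_target_json 0)    # <(...) is what bdevperf sees as /dev/fd/63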
00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:18.429 13:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:18.429 "params": { 00:29:18.429 "name": "Nvme0", 00:29:18.429 "trtype": "tcp", 00:29:18.429 "traddr": "10.0.0.2", 00:29:18.429 "adrfam": "ipv4", 00:29:18.429 "trsvcid": "4420", 00:29:18.429 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:18.429 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:18.429 "hdgst": false, 00:29:18.429 "ddgst": false 00:29:18.429 }, 00:29:18.429 "method": "bdev_nvme_attach_controller" 00:29:18.429 }' 00:29:18.429 [2024-11-19 13:21:21.785664] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:29:18.429 [2024-11-19 13:21:21.785712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3029095 ] 00:29:18.689 [2024-11-19 13:21:21.862631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.689 [2024-11-19 13:21:21.904028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.689 Running I/O for 10 seconds... 00:29:18.948 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:18.948 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:18.948 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:18.948 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.948 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.948 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.948 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:18.948 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:18.948 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:18.948 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:18.948 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:18.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:18.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:18.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:18.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:18.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:18.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:18.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:29:18.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:29:18.949 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:29:19.209 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:29:19.209 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:19.209 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:19.209 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:19.209 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.209 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:19.209 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.209 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=678 00:29:19.209 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 678 -ge 100 ']' 00:29:19.209 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:19.209 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:19.209 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:19.209 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:19.209 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.209 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:19.209 [2024-11-19 13:21:22.455282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.209 [2024-11-19 13:21:22.455323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.209 [2024-11-19 13:21:22.455338] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.209 [2024-11-19 13:21:22.455346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.209 [2024-11-19 13:21:22.455359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.209 [2024-11-19 13:21:22.455368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.209 [2024-11-19 13:21:22.455377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.209 [2024-11-19 13:21:22.455383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.209 [2024-11-19 13:21:22.455392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.209 [2024-11-19 13:21:22.455398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455496] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455648] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455799] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455962] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 13:21:22.455992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 13:21:22.455999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456113] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.211 [2024-11-19 13:21:22.456258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.211 [2024-11-19 13:21:22.456266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.211 [2024-11-19 13:21:22.456273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:19.211 [2024-11-19 13:21:22.456282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.211 [2024-11-19 13:21:22.456288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:19.211 [2024-11-19 13:21:22.456296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.211 [2024-11-19 13:21:22.456303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:19.211 [2024-11-19 13:21:22.457275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:19.211 task offset: 100096 on job bdev=Nvme0n1 fails
00:29:19.211
00:29:19.211 Latency(us)
00:29:19.211 [2024-11-19T12:21:22.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:19.211 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:19.211 Job: Nvme0n1 ended in about 0.40 seconds with error
00:29:19.211 Verification LBA range: start 0x0 length 0x400
00:29:19.211 Nvme0n1 : 0.40 1936.89 121.06 161.41 0.00 29655.25 1538.67 27810.06
00:29:19.211 [2024-11-19T12:21:22.588Z] ===================================================================================================================
00:29:19.211 [2024-11-19T12:21:22.588Z] Total : 1936.89 121.06 161.41 0.00 29655.25 1538.67 27810.06
00:29:19.211 [2024-11-19 13:21:22.459681] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:19.211 [2024-11-19 13:21:22.459704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bad500 (9): Bad file descriptor
00:29:19.211 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.211 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:29:19.211 [2024-11-19 13:21:22.460699] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:29:19.211 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.211 [2024-11-19 13:21:22.460776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:29:19.211 [2024-11-19 13:21:22.460800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:19.211 [2024-11-19 13:21:22.460811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:29:19.211 [2024-11-19 13:21:22.460819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:29:19.211
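The abort storm and the failed reconnect above are the behavior under test rather than a regression: host_management.sh revokes the host's access at @84 while bdevperf is mid-run, so every in-flight READ/WRITE completes as ABORTED - SQ DELETION, and the automatic controller reset is then refused with 'does not allow host' until @85 re-adds the host. The two RPCs driving the cycle, shown here in their scripts/rpc.py spelling (an assumption; the test issues them through its rpc_cmd wrapper):

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # running I/O aborts, reconnects rejected
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0      # host re-admitted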
[2024-11-19 13:21:22.460826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.211 [2024-11-19 13:21:22.460833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bad500 00:29:19.211 [2024-11-19 13:21:22.460852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bad500 (9): Bad file descriptor 00:29:19.211 [2024-11-19 13:21:22.460864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:19.211 [2024-11-19 13:21:22.460871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:19.211 [2024-11-19 13:21:22.460879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:19.211 [2024-11-19 13:21:22.460889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:19.211 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:19.211 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.211 13:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:20.145 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3029095 00:29:20.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3029095) - No such process 00:29:20.145 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:20.145 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:20.145 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:20.145 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:20.145 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:20.145 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:20.145 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.145 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.145 { 00:29:20.145 "params": { 00:29:20.145 "name": "Nvme$subsystem", 00:29:20.145 "trtype": "$TEST_TRANSPORT", 00:29:20.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.145 "adrfam": "ipv4", 00:29:20.145 "trsvcid": "$NVMF_PORT", 00:29:20.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.145 "hdgst": ${hdgst:-false}, 00:29:20.145 "ddgst": ${ddgst:-false} 00:29:20.145 }, 00:29:20.145 "method": "bdev_nvme_attach_controller" 00:29:20.145 } 00:29:20.145 EOF 
00:29:20.145 )")
00:29:20.145 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:29:20.145 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:29:20.145 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:29:20.145 13:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:29:20.145 "params": {
00:29:20.145 "name": "Nvme0",
00:29:20.145 "trtype": "tcp",
00:29:20.145 "traddr": "10.0.0.2",
00:29:20.145 "adrfam": "ipv4",
00:29:20.145 "trsvcid": "4420",
00:29:20.145 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:20.145 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:29:20.145 "hdgst": false,
00:29:20.145 "ddgst": false
00:29:20.145 },
00:29:20.145 "method": "bdev_nvme_attach_controller"
00:29:20.145 }'
00:29:20.405 [2024-11-19 13:21:23.527154] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:29:20.405 [2024-11-19 13:21:23.527204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3029344 ]
00:29:20.405 [2024-11-19 13:21:23.601879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:20.405 [2024-11-19 13:21:23.642476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:20.664 Running I/O for 1 seconds... 1984.00 IOPS, 124.00 MiB/s
00:29:21.600
00:29:21.600 Latency(us)
00:29:21.600 [2024-11-19T12:21:24.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:21.600 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:21.600 Verification LBA range: start 0x0 length 0x400
00:29:21.600 Nvme0n1 : 1.01 2025.07 126.57 0.00 0.00 31100.40 4786.98 27012.23
00:29:21.600 [2024-11-19T12:21:24.977Z] ===================================================================================================================
00:29:21.600 [2024-11-19T12:21:24.977Z] Total : 2025.07 126.57 0.00 0.00 31100.40 4786.98 27012.23
00:29:21.600 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:29:21.600 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:29:21.600 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:21.859 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:21.859 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:29:21.859 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:21.859 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:29:21.859 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:21.859 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@124 -- # set +e 00:29:21.859 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.859 13:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.859 rmmod nvme_tcp 00:29:21.859 rmmod nvme_fabrics 00:29:21.859 rmmod nvme_keyring 00:29:21.859 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.859 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:21.859 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:21.859 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3028832 ']' 00:29:21.859 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3028832 00:29:21.859 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3028832 ']' 00:29:21.859 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3028832 00:29:21.859 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:21.859 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.859 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3028832 00:29:21.859 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:21.860 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:21.860 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3028832' 00:29:21.860 killing process with pid 3028832 00:29:21.860 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3028832 00:29:21.860 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3028832 00:29:22.119 [2024-11-19 13:21:25.254512] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:22.119 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:22.119 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:22.119 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:22.119 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:22.119 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:22.119 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:22.119 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:22.119 13:21:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:22.119 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:22.119 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.119 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.119 13:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.025 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:24.025 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:24.025 00:29:24.025 real 0m12.795s 00:29:24.025 user 0m17.368s 00:29:24.025 sys 0m6.263s 00:29:24.025 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.025 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:24.025 ************************************ 00:29:24.025 END TEST nvmf_host_management 00:29:24.025 ************************************ 00:29:24.025 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:24.025 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:24.025 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.025 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:24.286 ************************************ 00:29:24.286 START TEST nvmf_lvol 00:29:24.286 ************************************ 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:24.286 * Looking for test storage... 
00:29:24.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:24.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.286 --rc genhtml_branch_coverage=1 00:29:24.286 --rc genhtml_function_coverage=1 00:29:24.286 --rc genhtml_legend=1 00:29:24.286 --rc geninfo_all_blocks=1 00:29:24.286 --rc geninfo_unexecuted_blocks=1 00:29:24.286 00:29:24.286 ' 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:24.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.286 --rc genhtml_branch_coverage=1 00:29:24.286 --rc genhtml_function_coverage=1 00:29:24.286 --rc genhtml_legend=1 00:29:24.286 --rc geninfo_all_blocks=1 00:29:24.286 --rc geninfo_unexecuted_blocks=1 00:29:24.286 00:29:24.286 ' 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:24.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.286 --rc genhtml_branch_coverage=1 00:29:24.286 --rc genhtml_function_coverage=1 00:29:24.286 --rc genhtml_legend=1 00:29:24.286 --rc geninfo_all_blocks=1 00:29:24.286 --rc geninfo_unexecuted_blocks=1 00:29:24.286 00:29:24.286 ' 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:24.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.286 --rc genhtml_branch_coverage=1 00:29:24.286 --rc genhtml_function_coverage=1 00:29:24.286 --rc genhtml_legend=1 00:29:24.286 --rc geninfo_all_blocks=1 00:29:24.286 --rc geninfo_unexecuted_blocks=1 00:29:24.286 00:29:24.286 ' 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.286 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.287 13:21:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:24.287 13:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:30.858 13:21:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:30.858 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:30.858 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:30.858 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:30.859 Found net devices under 0000:86:00.0: cvl_0_0 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:30.859 Found net devices under 0000:86:00.1: cvl_0_1 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.859 
13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:30.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:29:30.859 00:29:30.859 --- 10.0.0.2 ping statistics --- 00:29:30.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.859 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:30.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:30.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:29:30.859 00:29:30.859 --- 10.0.0.1 ping statistics --- 00:29:30.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.859 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3033099 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3033099 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3033099 ']' 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.859 13:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:30.859 [2024-11-19 13:21:33.594155] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:29:30.859 [2024-11-19 13:21:33.595115] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:29:30.859 [2024-11-19 13:21:33.595155] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.859 [2024-11-19 13:21:33.674478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:30.859 [2024-11-19 13:21:33.714820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.859 [2024-11-19 13:21:33.714855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.859 [2024-11-19 13:21:33.714862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.859 [2024-11-19 13:21:33.714868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.859 [2024-11-19 13:21:33.714873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.859 [2024-11-19 13:21:33.716153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.859 [2024-11-19 13:21:33.716264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.859 [2024-11-19 13:21:33.716265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.859 [2024-11-19 13:21:33.784026] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:30.859 [2024-11-19 13:21:33.784786] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:30.860 [2024-11-19 13:21:33.784838] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:30.860 [2024-11-19 13:21:33.785064] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:29:31.119 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.119 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:31.119 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:31.119 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:31.119 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:31.119 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.119 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:31.378 [2024-11-19 13:21:34.641017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.378 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:31.637 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:31.637 13:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:31.896 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:31.896 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:32.155 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:32.155 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c1733e3a-b68d-45a5-a076-71cf7bb016e9 00:29:32.155 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c1733e3a-b68d-45a5-a076-71cf7bb016e9 lvol 20 00:29:32.414 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=76cfc049-bab3-4d9c-a791-684e1ef15c5d 00:29:32.414 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:32.674 13:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 76cfc049-bab3-4d9c-a791-684e1ef15c5d 00:29:32.933 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:32.933 [2024-11-19 13:21:36.260910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:29:32.933 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:33.191 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3033596 00:29:33.191 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:33.191 13:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:34.127 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 76cfc049-bab3-4d9c-a791-684e1ef15c5d MY_SNAPSHOT 00:29:34.386 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d9a04294-9edb-449d-878d-eb14c6dee2cb 00:29:34.386 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 76cfc049-bab3-4d9c-a791-684e1ef15c5d 30 00:29:34.646 13:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d9a04294-9edb-449d-878d-eb14c6dee2cb MY_CLONE 00:29:34.904 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1bb44851-30e0-4443-9ec1-3e3b3223ff31 00:29:34.904 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1bb44851-30e0-4443-9ec1-3e3b3223ff31 00:29:35.471 13:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3033596 00:29:43.586 Initializing NVMe Controllers 00:29:43.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:43.586 Controller IO queue size 128, less than required. 00:29:43.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:43.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:43.586 Initialization complete. Launching workers. 
00:29:43.586 ======================================================== 00:29:43.586 Latency(us) 00:29:43.586 Device Information : IOPS MiB/s Average min max 00:29:43.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12275.00 47.95 10427.05 2401.09 67863.22 00:29:43.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12130.40 47.38 10554.63 1389.15 64085.48 00:29:43.586 ======================================================== 00:29:43.586 Total : 24405.40 95.33 10490.46 1389.15 67863.22 00:29:43.586 00:29:43.586 13:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:43.845 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 76cfc049-bab3-4d9c-a791-684e1ef15c5d 00:29:44.103 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c1733e3a-b68d-45a5-a076-71cf7bb016e9 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:44.362 rmmod nvme_tcp 00:29:44.362 rmmod nvme_fabrics 00:29:44.362 rmmod nvme_keyring 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3033099 ']' 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3033099 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3033099 ']' 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3033099 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3033099 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3033099' 00:29:44.362 killing process with pid 3033099 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3033099 00:29:44.362 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3033099 00:29:44.621 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:44.621 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:44.621 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:44.621 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:44.622 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:44.622 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:44.622 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:44.622 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:44.622 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:44.622 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.622 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.622 13:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.536 13:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:46.536 00:29:46.536 real 0m22.472s 00:29:46.536 user 0m55.768s 00:29:46.536 sys 0m9.958s 00:29:46.536 13:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.536 13:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:46.536 ************************************ 00:29:46.536 END TEST nvmf_lvol 00:29:46.536 ************************************ 00:29:46.796 13:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:46.796 13:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:46.796 13:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.796 13:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:46.796 ************************************ 00:29:46.796 START TEST nvmf_lvs_grow 00:29:46.796 
************************************ 00:29:46.796 13:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:46.796 * Looking for test storage... 00:29:46.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:46.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.797 --rc genhtml_branch_coverage=1 00:29:46.797 --rc genhtml_function_coverage=1 00:29:46.797 --rc genhtml_legend=1 00:29:46.797 --rc geninfo_all_blocks=1 00:29:46.797 --rc geninfo_unexecuted_blocks=1 00:29:46.797 00:29:46.797 ' 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:46.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.797 --rc genhtml_branch_coverage=1 00:29:46.797 --rc genhtml_function_coverage=1 00:29:46.797 --rc genhtml_legend=1 00:29:46.797 --rc geninfo_all_blocks=1 00:29:46.797 --rc geninfo_unexecuted_blocks=1 00:29:46.797 00:29:46.797 ' 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:46.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.797 --rc genhtml_branch_coverage=1 00:29:46.797 --rc genhtml_function_coverage=1 00:29:46.797 --rc genhtml_legend=1 00:29:46.797 --rc geninfo_all_blocks=1 00:29:46.797 --rc geninfo_unexecuted_blocks=1 00:29:46.797 00:29:46.797 ' 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:46.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.797 --rc genhtml_branch_coverage=1 00:29:46.797 --rc genhtml_function_coverage=1 00:29:46.797 --rc genhtml_legend=1 00:29:46.797 --rc geninfo_all_blocks=1 00:29:46.797 --rc geninfo_unexecuted_blocks=1 00:29:46.797 00:29:46.797 ' 00:29:46.797 13:21:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.797 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.798 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
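The trace above shows nvmf/common.sh building the target's argument vector in the NVMF_APP array: the shared-memory id and a 0xFFFF tracepoint mask are always appended, and --interrupt-mode is added because this run exercises interrupt mode (the '[' 1 -eq 1 ']' branch). A minimal sketch of the same pattern, assuming an illustrative binary path and gate variable:

  #!/usr/bin/env bash
  # Compose the nvmf_tgt argument vector as the harness does above.
  NVMF_APP=(./build/bin/nvmf_tgt)               # binary path is illustrative
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + tracepoint mask
  if (( interrupt_mode )); then                 # gate variable name is assumed
    NVMF_APP+=(--interrupt-mode)
  fi
  "${NVMF_APP[@]}" &                            # actually launched later in the trace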
00:29:46.798 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:46.798 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:46.798 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:46.798 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:46.798 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:47.057 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:47.057 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:47.057 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:47.057 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:47.057 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.057 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:47.057 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:47.057 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:47.057 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.057 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.057 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.057 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:47.057 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:47.057 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:47.057 13:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:53.626 13:21:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
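Next the harness resolves which kernel net interfaces sit behind the two detected E810 functions (vendor 0x8086, device 0x159b). The lookup pattern, visible at nvmf/common.sh@411 and @427-428 below, also works standalone:

  #!/usr/bin/env bash
  # List the net interfaces sysfs attaches to a PCI function, as the trace
  # does for 0000:86:00.0 and 0000:86:00.1.
  pci=0000:86:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"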
00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:53.626 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:53.626 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:53.626 Found net devices under 0000:86:00.0: cvl_0_0 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:53.626 Found net devices under 0000:86:00.1: cvl_0_1 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:53.626 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:53.627 13:21:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:53.627 13:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:53.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:29:53.627 00:29:53.627 --- 10.0.0.2 ping statistics --- 00:29:53.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.627 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:53.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:29:53.627 00:29:53.627 --- 10.0.0.1 ping statistics --- 00:29:53.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.627 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3038947 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3038947 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3038947 ']' 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:53.627 [2024-11-19 13:21:56.152176] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
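At this point nvmf_tgt is starting inside the namespace. The preceding setup produced a point-to-point topology: the first E810 port (cvl_0_0, 10.0.0.2) was moved into the cvl_0_0_ns_spdk namespace for the target, the second (cvl_0_1, 10.0.0.1) stayed in the host namespace for the initiator, an iptables rule admits TCP port 4420, and a ping in each direction verified connectivity. Condensed into a standalone sketch, with interface names and addresses exactly as logged:

  ip netns add cvl_0_0_ns_spdk                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # host -> target sanity check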
00:29:53.627 [2024-11-19 13:21:56.153111] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:29:53.627 [2024-11-19 13:21:56.153140] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.627 [2024-11-19 13:21:56.229812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.627 [2024-11-19 13:21:56.271774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.627 [2024-11-19 13:21:56.271811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.627 [2024-11-19 13:21:56.271818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.627 [2024-11-19 13:21:56.271824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.627 [2024-11-19 13:21:56.271830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:53.627 [2024-11-19 13:21:56.272394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.627 [2024-11-19 13:21:56.339896] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:53.627 [2024-11-19 13:21:56.340124] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:53.627 [2024-11-19 13:21:56.573044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:53.627 ************************************ 00:29:53.627 START TEST lvs_grow_clean 00:29:53.627 ************************************ 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:53.627 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:53.628 13:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:53.887 13:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4170c694-781c-4689-8bb1-a0b381ff008f 00:29:53.887 13:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:53.887 13:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4170c694-781c-4689-8bb1-a0b381ff008f 00:29:53.887 13:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:53.887 13:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:54.146 13:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4170c694-781c-4689-8bb1-a0b381ff008f lvol 150 00:29:54.146 13:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f41e9439-fcda-4dcd-b99e-b841f509a2d8 00:29:54.146 13:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:54.146 13:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:54.405 [2024-11-19 13:21:57.640822] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:54.405 [2024-11-19 13:21:57.640998] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:54.405 true 00:29:54.405 13:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4170c694-781c-4689-8bb1-a0b381ff008f 00:29:54.405 13:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:54.665 13:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:54.665 13:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:54.924 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f41e9439-fcda-4dcd-b99e-b841f509a2d8 00:29:54.924 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:55.183 [2024-11-19 13:21:58.425291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.183 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:55.442 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:55.442 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3039273 00:29:55.442 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:55.442 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3039273 /var/tmp/bdevperf.sock 00:29:55.442 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3039273 ']' 00:29:55.442 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:55.442 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:55.442 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:55.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:55.442 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:55.442 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:55.442 [2024-11-19 13:21:58.662389] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:29:55.442 [2024-11-19 13:21:58.662435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039273 ] 00:29:55.442 [2024-11-19 13:21:58.736941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.442 [2024-11-19 13:21:58.780138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.701 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:55.701 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:55.701 13:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:55.958 Nvme0n1 00:29:55.958 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:55.958 [ 00:29:55.958 { 00:29:55.958 "name": "Nvme0n1", 00:29:55.958 "aliases": [ 00:29:55.958 "f41e9439-fcda-4dcd-b99e-b841f509a2d8" 00:29:55.958 ], 00:29:55.958 "product_name": "NVMe disk", 00:29:55.958 "block_size": 4096, 00:29:55.958 "num_blocks": 38912, 00:29:55.958 "uuid": "f41e9439-fcda-4dcd-b99e-b841f509a2d8", 00:29:55.958 "numa_id": 1, 00:29:55.958 "assigned_rate_limits": { 00:29:55.958 "rw_ios_per_sec": 0, 00:29:55.958 "rw_mbytes_per_sec": 0, 00:29:55.958 "r_mbytes_per_sec": 0, 00:29:55.958 "w_mbytes_per_sec": 0 00:29:55.958 }, 00:29:55.958 "claimed": false, 00:29:55.958 "zoned": false, 00:29:55.958 "supported_io_types": { 00:29:55.958 "read": true, 00:29:55.958 "write": true, 00:29:55.958 "unmap": true, 00:29:55.958 "flush": true, 00:29:55.958 "reset": true, 00:29:55.958 "nvme_admin": true, 00:29:55.958 "nvme_io": true, 00:29:55.958 "nvme_io_md": false, 00:29:55.958 "write_zeroes": true, 00:29:55.958 "zcopy": false, 00:29:55.958 "get_zone_info": false, 00:29:55.958 "zone_management": false, 00:29:55.958 "zone_append": false, 00:29:55.958 "compare": true, 00:29:55.958 "compare_and_write": true, 00:29:55.958 "abort": true, 00:29:55.958 "seek_hole": false, 00:29:55.958 "seek_data": false, 00:29:55.958 "copy": true, 
00:29:55.958 "nvme_iov_md": false 00:29:55.958 }, 00:29:55.958 "memory_domains": [ 00:29:55.958 { 00:29:55.958 "dma_device_id": "system", 00:29:55.958 "dma_device_type": 1 00:29:55.958 } 00:29:55.958 ], 00:29:55.958 "driver_specific": { 00:29:55.958 "nvme": [ 00:29:55.958 { 00:29:55.958 "trid": { 00:29:55.958 "trtype": "TCP", 00:29:55.958 "adrfam": "IPv4", 00:29:55.958 "traddr": "10.0.0.2", 00:29:55.958 "trsvcid": "4420", 00:29:55.958 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:55.958 }, 00:29:55.958 "ctrlr_data": { 00:29:55.958 "cntlid": 1, 00:29:55.958 "vendor_id": "0x8086", 00:29:55.958 "model_number": "SPDK bdev Controller", 00:29:55.958 "serial_number": "SPDK0", 00:29:55.958 "firmware_revision": "25.01", 00:29:55.958 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:55.958 "oacs": { 00:29:55.958 "security": 0, 00:29:55.958 "format": 0, 00:29:55.958 "firmware": 0, 00:29:55.958 "ns_manage": 0 00:29:55.958 }, 00:29:55.958 "multi_ctrlr": true, 00:29:55.959 "ana_reporting": false 00:29:55.959 }, 00:29:55.959 "vs": { 00:29:55.959 "nvme_version": "1.3" 00:29:55.959 }, 00:29:55.959 "ns_data": { 00:29:55.959 "id": 1, 00:29:55.959 "can_share": true 00:29:55.959 } 00:29:55.959 } 00:29:55.959 ], 00:29:55.959 "mp_policy": "active_passive" 00:29:55.959 } 00:29:55.959 } 00:29:55.959 ] 00:29:55.959 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3039466 00:29:55.959 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:55.959 13:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:56.219 Running I/O for 10 seconds... 
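The per-second samples that follow come from a ten-second randwrite run. The workload was launched earlier in the trace as bdevperf with a 4 KiB I/O size at queue depth 128 on core mask 0x2, then kicked off over its RPC socket (paths shortened from the log):

  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
      -w randwrite -t 10 -S 1 -z &
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests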
00:29:57.270 Latency(us) 00:29:57.270 [2024-11-19T12:22:00.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:57.270 Nvme0n1 : 1.00 21971.00 85.82 0.00 0.00 0.00 0.00 0.00 00:29:57.270 [2024-11-19T12:22:00.647Z] =================================================================================================================== 00:29:57.270 [2024-11-19T12:22:00.647Z] Total : 21971.00 85.82 0.00 0.00 0.00 0.00 0.00 00:29:57.270 00:29:58.205 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4170c694-781c-4689-8bb1-a0b381ff008f 00:29:58.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:58.205 Nvme0n1 : 2.00 22415.50 87.56 0.00 0.00 0.00 0.00 0.00 00:29:58.205 [2024-11-19T12:22:01.582Z] =================================================================================================================== 00:29:58.205 [2024-11-19T12:22:01.582Z] Total : 22415.50 87.56 0.00 0.00 0.00 0.00 0.00 00:29:58.205 00:29:58.205 true 00:29:58.205 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4170c694-781c-4689-8bb1-a0b381ff008f 00:29:58.205 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:58.464 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:58.464 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:58.464 13:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3039466 00:29:59.033 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:59.033 Nvme0n1 : 3.00 22563.67 88.14 0.00 0.00 0.00 0.00 0.00 00:29:59.033 [2024-11-19T12:22:02.410Z] =================================================================================================================== 00:29:59.033 [2024-11-19T12:22:02.410Z] Total : 22563.67 88.14 0.00 0.00 0.00 0.00 0.00 00:29:59.033 00:30:00.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:00.411 Nvme0n1 : 4.00 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:30:00.411 [2024-11-19T12:22:03.788Z] =================================================================================================================== 00:30:00.411 [2024-11-19T12:22:03.788Z] Total : 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:30:00.411 00:30:01.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:01.352 Nvme0n1 : 5.00 22745.80 88.85 0.00 0.00 0.00 0.00 0.00 00:30:01.352 [2024-11-19T12:22:04.729Z] =================================================================================================================== 00:30:01.352 [2024-11-19T12:22:04.729Z] Total : 22745.80 88.85 0.00 0.00 0.00 0.00 0.00 00:30:01.352 00:30:02.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:02.290 Nvme0n1 : 6.00 22802.17 89.07 0.00 0.00 0.00 0.00 0.00 00:30:02.290 [2024-11-19T12:22:05.667Z] 
=================================================================================================================== 00:30:02.290 [2024-11-19T12:22:05.667Z] Total : 22802.17 89.07 0.00 0.00 0.00 0.00 0.00 00:30:02.290 00:30:03.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:03.229 Nvme0n1 : 7.00 22846.71 89.24 0.00 0.00 0.00 0.00 0.00 00:30:03.229 [2024-11-19T12:22:06.606Z] =================================================================================================================== 00:30:03.229 [2024-11-19T12:22:06.606Z] Total : 22846.71 89.24 0.00 0.00 0.00 0.00 0.00 00:30:03.229 00:30:04.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:04.167 Nvme0n1 : 8.00 22880.12 89.38 0.00 0.00 0.00 0.00 0.00 00:30:04.167 [2024-11-19T12:22:07.544Z] =================================================================================================================== 00:30:04.167 [2024-11-19T12:22:07.544Z] Total : 22880.12 89.38 0.00 0.00 0.00 0.00 0.00 00:30:04.167 00:30:05.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:05.107 Nvme0n1 : 9.00 22913.22 89.50 0.00 0.00 0.00 0.00 0.00 00:30:05.107 [2024-11-19T12:22:08.484Z] =================================================================================================================== 00:30:05.107 [2024-11-19T12:22:08.484Z] Total : 22913.22 89.50 0.00 0.00 0.00 0.00 0.00 00:30:05.107 00:30:06.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:06.053 Nvme0n1 : 10.00 22933.50 89.58 0.00 0.00 0.00 0.00 0.00 00:30:06.053 [2024-11-19T12:22:09.430Z] =================================================================================================================== 00:30:06.053 [2024-11-19T12:22:09.430Z] Total : 22933.50 89.58 0.00 0.00 0.00 0.00 0.00 00:30:06.053 00:30:06.053 00:30:06.053 Latency(us) 00:30:06.053 [2024-11-19T12:22:09.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:06.053 Nvme0n1 : 10.00 22937.74 89.60 0.00 0.00 5577.29 3205.57 26328.38 00:30:06.053 [2024-11-19T12:22:09.430Z] =================================================================================================================== 00:30:06.053 [2024-11-19T12:22:09.430Z] Total : 22937.74 89.60 0.00 0.00 5577.29 3205.57 26328.38 00:30:06.053 { 00:30:06.053 "results": [ 00:30:06.053 { 00:30:06.053 "job": "Nvme0n1", 00:30:06.053 "core_mask": "0x2", 00:30:06.053 "workload": "randwrite", 00:30:06.053 "status": "finished", 00:30:06.053 "queue_depth": 128, 00:30:06.053 "io_size": 4096, 00:30:06.053 "runtime": 10.002991, 00:30:06.053 "iops": 22937.739322168738, 00:30:06.053 "mibps": 89.60054422722163, 00:30:06.053 "io_failed": 0, 00:30:06.053 "io_timeout": 0, 00:30:06.053 "avg_latency_us": 5577.293131546724, 00:30:06.053 "min_latency_us": 3205.5652173913045, 00:30:06.053 "max_latency_us": 26328.375652173912 00:30:06.053 } 00:30:06.053 ], 00:30:06.053 "core_count": 1 00:30:06.053 } 00:30:06.311 13:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3039273 00:30:06.311 13:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3039273 ']' 00:30:06.311 13:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3039273 
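As a quick consistency check on the summary above: the MiB/s column is simply IOPS times the 4096-byte I/O size passed via -o.

  # 22937.74 IOPS x 4096 B per I/O = 89.60 MiB/s, matching the reported column.
  awk 'BEGIN { printf "%.2f MiB/s\n", 22937.74 * 4096 / 1048576 }'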
00:30:06.311 13:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:06.311 13:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:06.311 13:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3039273 00:30:06.311 13:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:06.312 13:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:06.312 13:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3039273' 00:30:06.312 killing process with pid 3039273 00:30:06.312 13:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3039273 00:30:06.312 Received shutdown signal, test time was about 10.000000 seconds 00:30:06.312 00:30:06.312 Latency(us) 00:30:06.312 [2024-11-19T12:22:09.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.312 [2024-11-19T12:22:09.689Z] =================================================================================================================== 00:30:06.312 [2024-11-19T12:22:09.689Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:06.312 13:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3039273 00:30:06.312 13:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:06.571 13:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:06.830 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4170c694-781c-4689-8bb1-a0b381ff008f 00:30:06.830 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:07.090 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:07.090 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:07.090 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:07.090 [2024-11-19 13:22:10.432859] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4170c694-781c-4689-8bb1-a0b381ff008f 
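The NOT wrapper entered next is the harness's expected-failure helper: aio_bdev was just deleted, which hot-removed the lvstore built on top of it, so bdev_lvol_get_lvstores must now fail with JSON-RPC error -19 (No such device) for the test to pass. The same check in isolation (repository-relative rpc.py path for brevity, UUID from the log):

  # After bdev_aio_delete, the lvstore lookup is expected to fail.
  if scripts/rpc.py bdev_lvol_get_lvstores -u 4170c694-781c-4689-8bb1-a0b381ff008f; then
      echo 'lvstore still present after aio bdev removal' >&2
      exit 1
  fi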
00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4170c694-781c-4689-8bb1-a0b381ff008f 00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4170c694-781c-4689-8bb1-a0b381ff008f 00:30:07.350 request: 00:30:07.350 { 00:30:07.350 "uuid": "4170c694-781c-4689-8bb1-a0b381ff008f", 00:30:07.350 "method": "bdev_lvol_get_lvstores", 00:30:07.350 "req_id": 1 00:30:07.350 } 00:30:07.350 Got JSON-RPC error response 00:30:07.350 response: 00:30:07.350 { 00:30:07.350 "code": -19, 00:30:07.350 "message": "No such device" 00:30:07.350 } 00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:07.350 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:07.610 aio_bdev 00:30:07.610 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
f41e9439-fcda-4dcd-b99e-b841f509a2d8 00:30:07.610 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f41e9439-fcda-4dcd-b99e-b841f509a2d8 00:30:07.610 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:07.610 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:07.610 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:07.610 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:07.610 13:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:07.869 13:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f41e9439-fcda-4dcd-b99e-b841f509a2d8 -t 2000 00:30:08.129 [ 00:30:08.129 { 00:30:08.129 "name": "f41e9439-fcda-4dcd-b99e-b841f509a2d8", 00:30:08.129 "aliases": [ 00:30:08.129 "lvs/lvol" 00:30:08.129 ], 00:30:08.129 "product_name": "Logical Volume", 00:30:08.129 "block_size": 4096, 00:30:08.129 "num_blocks": 38912, 00:30:08.129 "uuid": "f41e9439-fcda-4dcd-b99e-b841f509a2d8", 00:30:08.129 "assigned_rate_limits": { 00:30:08.129 "rw_ios_per_sec": 0, 00:30:08.129 "rw_mbytes_per_sec": 0, 00:30:08.129 "r_mbytes_per_sec": 0, 00:30:08.129 "w_mbytes_per_sec": 0 00:30:08.129 }, 00:30:08.129 "claimed": false, 00:30:08.129 "zoned": false, 00:30:08.129 "supported_io_types": { 00:30:08.129 "read": true, 00:30:08.129 "write": true, 00:30:08.129 "unmap": true, 00:30:08.129 "flush": false, 00:30:08.129 "reset": true, 00:30:08.129 "nvme_admin": false, 00:30:08.129 "nvme_io": false, 00:30:08.129 "nvme_io_md": false, 00:30:08.129 "write_zeroes": true, 00:30:08.129 "zcopy": false, 00:30:08.129 "get_zone_info": false, 00:30:08.129 "zone_management": false, 00:30:08.129 "zone_append": false, 00:30:08.129 "compare": false, 00:30:08.129 "compare_and_write": false, 00:30:08.129 "abort": false, 00:30:08.129 "seek_hole": true, 00:30:08.129 "seek_data": true, 00:30:08.129 "copy": false, 00:30:08.129 "nvme_iov_md": false 00:30:08.129 }, 00:30:08.129 "driver_specific": { 00:30:08.129 "lvol": { 00:30:08.129 "lvol_store_uuid": "4170c694-781c-4689-8bb1-a0b381ff008f", 00:30:08.129 "base_bdev": "aio_bdev", 00:30:08.129 "thin_provision": false, 00:30:08.129 "num_allocated_clusters": 38, 00:30:08.129 "snapshot": false, 00:30:08.129 "clone": false, 00:30:08.129 "esnap_clone": false 00:30:08.129 } 00:30:08.129 } 00:30:08.129 } 00:30:08.129 ] 00:30:08.129 13:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:08.129 13:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4170c694-781c-4689-8bb1-a0b381ff008f 00:30:08.129 13:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:08.129 13:22:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:08.129 13:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4170c694-781c-4689-8bb1-a0b381ff008f 00:30:08.129 13:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:08.388 13:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:08.388 13:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f41e9439-fcda-4dcd-b99e-b841f509a2d8 00:30:08.648 13:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4170c694-781c-4689-8bb1-a0b381ff008f 00:30:08.907 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:09.166 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:09.166 00:30:09.166 real 0m15.680s 00:30:09.166 user 0m15.114s 00:30:09.166 sys 0m1.536s 00:30:09.166 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:09.166 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:09.166 ************************************ 00:30:09.166 END TEST lvs_grow_clean 00:30:09.166 ************************************ 00:30:09.166 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:09.166 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:09.166 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:09.166 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:09.166 ************************************ 00:30:09.166 START TEST lvs_grow_dirty 00:30:09.166 ************************************ 00:30:09.166 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:09.166 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:09.166 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:09.166 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:09.166 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:09.166 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:09.166 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:09.166 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:09.166 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:09.167 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:09.426 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:09.426 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:09.685 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c 00:30:09.685 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c 00:30:09.685 13:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:09.685 13:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:09.685 13:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:09.685 13:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c lvol 150 00:30:09.944 13:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c54f8619-d5c6-456e-9d90-915fe0523553 00:30:09.944 13:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:09.944 13:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:10.203 [2024-11-19 13:22:13.392788] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:10.203 [2024-11-19 13:22:13.392922] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:10.203 true 00:30:10.203 13:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:10.203 13:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c 00:30:10.462 13:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:10.462 13:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:10.462 13:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c54f8619-d5c6-456e-9d90-915fe0523553 00:30:10.722 13:22:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:10.981 [2024-11-19 13:22:14.161217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.981 13:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:11.240 13:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:11.240 13:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3041841 00:30:11.240 13:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:11.240 13:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3041841 /var/tmp/bdevperf.sock 00:30:11.240 13:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3041841 ']' 00:30:11.240 13:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:11.240 13:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.240 13:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:11.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
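The dirty-grow setup traced above boils down to a handful of rpc.py calls. As a minimal sketch, assuming RPC_PY points at spdk/scripts/rpc.py and AIO_FILE at the test's aio_bdev backing file (both placeholder names are ours, and the UUID variables simply capture what the create calls print):

    # Create a 200M AIO-backed lvstore with one 150M lvol, then grow the
    # backing file to 400M and rescan so the lvstore can pick up the space.
    RPC_PY=spdk/scripts/rpc.py        # assumption: path in your SPDK tree
    AIO_FILE=/tmp/aio_bdev            # assumption: any scratch file works
    rm -f "$AIO_FILE" && truncate -s 200M "$AIO_FILE"
    "$RPC_PY" bdev_aio_create "$AIO_FILE" aio_bdev 4096
    lvs=$("$RPC_PY" bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$("$RPC_PY" bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M "$AIO_FILE"
    "$RPC_PY" bdev_aio_rescan aio_bdev
    # Export the lvol over NVMe/TCP so bdevperf can write to it.
    "$RPC_PY" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$RPC_PY" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    "$RPC_PY" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

At this point the lvstore still reports 49 data clusters, as the trace confirms: the backing file grew, but bdev_lvol_grow_lvstore has not been called yet.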
00:30:11.240 13:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.240 13:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:11.240 [2024-11-19 13:22:14.407390] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:30:11.240 [2024-11-19 13:22:14.407434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3041841 ] 00:30:11.240 [2024-11-19 13:22:14.483115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.240 [2024-11-19 13:22:14.526931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.499 13:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:11.499 13:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:11.499 13:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:11.758 Nvme0n1 00:30:11.758 13:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:11.758 [ 00:30:11.758 { 00:30:11.758 "name": "Nvme0n1", 00:30:11.758 "aliases": [ 00:30:11.758 "c54f8619-d5c6-456e-9d90-915fe0523553" 00:30:11.758 ], 00:30:11.758 "product_name": "NVMe disk", 00:30:11.758 "block_size": 4096, 00:30:11.758 "num_blocks": 38912, 00:30:11.758 "uuid": "c54f8619-d5c6-456e-9d90-915fe0523553", 00:30:11.758 "numa_id": 1, 00:30:11.758 "assigned_rate_limits": { 00:30:11.758 "rw_ios_per_sec": 0, 00:30:11.758 "rw_mbytes_per_sec": 0, 00:30:11.758 "r_mbytes_per_sec": 0, 00:30:11.758 "w_mbytes_per_sec": 0 00:30:11.758 }, 00:30:11.758 "claimed": false, 00:30:11.758 "zoned": false, 00:30:11.758 "supported_io_types": { 00:30:11.758 "read": true, 00:30:11.758 "write": true, 00:30:11.758 "unmap": true, 00:30:11.758 "flush": true, 00:30:11.758 "reset": true, 00:30:11.758 "nvme_admin": true, 00:30:11.758 "nvme_io": true, 00:30:11.758 "nvme_io_md": false, 00:30:11.758 "write_zeroes": true, 00:30:11.758 "zcopy": false, 00:30:11.758 "get_zone_info": false, 00:30:11.758 "zone_management": false, 00:30:11.758 "zone_append": false, 00:30:11.758 "compare": true, 00:30:11.758 "compare_and_write": true, 00:30:11.758 "abort": true, 00:30:11.758 "seek_hole": false, 00:30:11.758 "seek_data": false, 00:30:11.758 "copy": true, 00:30:11.758 "nvme_iov_md": false 00:30:11.758 }, 00:30:11.758 "memory_domains": [ 00:30:11.758 { 00:30:11.758 "dma_device_id": "system", 00:30:11.758 "dma_device_type": 1 00:30:11.758 } 00:30:11.758 ], 00:30:11.758 "driver_specific": { 00:30:11.758 "nvme": [ 00:30:11.758 { 00:30:11.758 "trid": { 00:30:11.758 "trtype": "TCP", 00:30:11.758 "adrfam": "IPv4", 00:30:11.758 "traddr": "10.0.0.2", 00:30:11.758 "trsvcid": "4420", 00:30:11.758 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:11.758 }, 00:30:11.758 "ctrlr_data": 
{ 00:30:11.758 "cntlid": 1, 00:30:11.758 "vendor_id": "0x8086", 00:30:11.758 "model_number": "SPDK bdev Controller", 00:30:11.758 "serial_number": "SPDK0", 00:30:11.758 "firmware_revision": "25.01", 00:30:11.758 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:11.758 "oacs": { 00:30:11.758 "security": 0, 00:30:11.758 "format": 0, 00:30:11.758 "firmware": 0, 00:30:11.758 "ns_manage": 0 00:30:11.758 }, 00:30:11.758 "multi_ctrlr": true, 00:30:11.758 "ana_reporting": false 00:30:11.758 }, 00:30:11.758 "vs": { 00:30:11.758 "nvme_version": "1.3" 00:30:11.758 }, 00:30:11.758 "ns_data": { 00:30:11.758 "id": 1, 00:30:11.758 "can_share": true 00:30:11.758 } 00:30:11.758 } 00:30:11.758 ], 00:30:11.758 "mp_policy": "active_passive" 00:30:11.758 } 00:30:11.758 } 00:30:11.758 ] 00:30:11.758 13:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:11.758 13:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3042058 00:30:11.758 13:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:12.017 Running I/O for 10 seconds... 00:30:12.953 Latency(us) 00:30:12.953 [2024-11-19T12:22:16.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:12.953 Nvme0n1 : 1.00 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:30:12.953 [2024-11-19T12:22:16.330Z] =================================================================================================================== 00:30:12.953 [2024-11-19T12:22:16.330Z] Total : 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:30:12.953 00:30:13.889 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c 00:30:13.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.889 Nvme0n1 : 2.00 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:30:13.889 [2024-11-19T12:22:17.266Z] =================================================================================================================== 00:30:13.889 [2024-11-19T12:22:17.266Z] Total : 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:30:13.889 00:30:14.147 true 00:30:14.147 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:14.148 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c 00:30:14.407 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:14.407 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:14.407 13:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3042058 00:30:14.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:14.975 Nvme0n1 : 
3.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:30:14.975 [2024-11-19T12:22:18.352Z] =================================================================================================================== 00:30:14.975 [2024-11-19T12:22:18.352Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:30:14.975 00:30:15.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:15.913 Nvme0n1 : 4.00 22828.25 89.17 0.00 0.00 0.00 0.00 0.00 00:30:15.913 [2024-11-19T12:22:19.290Z] =================================================================================================================== 00:30:15.913 [2024-11-19T12:22:19.290Z] Total : 22828.25 89.17 0.00 0.00 0.00 0.00 0.00 00:30:15.913 00:30:16.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:16.850 Nvme0n1 : 5.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:30:16.850 [2024-11-19T12:22:20.227Z] =================================================================================================================== 00:30:16.850 [2024-11-19T12:22:20.227Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:30:16.850 00:30:18.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:18.228 Nvme0n1 : 6.00 22902.33 89.46 0.00 0.00 0.00 0.00 0.00 00:30:18.228 [2024-11-19T12:22:21.605Z] =================================================================================================================== 00:30:18.228 [2024-11-19T12:22:21.605Z] Total : 22902.33 89.46 0.00 0.00 0.00 0.00 0.00 00:30:18.228 00:30:19.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:19.166 Nvme0n1 : 7.00 22941.71 89.62 0.00 0.00 0.00 0.00 0.00 00:30:19.166 [2024-11-19T12:22:22.543Z] =================================================================================================================== 00:30:19.166 [2024-11-19T12:22:22.543Z] Total : 22941.71 89.62 0.00 0.00 0.00 0.00 0.00 00:30:19.166 00:30:20.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.103 Nvme0n1 : 8.00 22961.38 89.69 0.00 0.00 0.00 0.00 0.00 00:30:20.103 [2024-11-19T12:22:23.480Z] =================================================================================================================== 00:30:20.103 [2024-11-19T12:22:23.480Z] Total : 22961.38 89.69 0.00 0.00 0.00 0.00 0.00 00:30:20.103 00:30:21.041 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:21.041 Nvme0n1 : 9.00 22978.33 89.76 0.00 0.00 0.00 0.00 0.00 00:30:21.041 [2024-11-19T12:22:24.418Z] =================================================================================================================== 00:30:21.041 [2024-11-19T12:22:24.418Z] Total : 22978.33 89.76 0.00 0.00 0.00 0.00 0.00 00:30:21.041 00:30:21.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:21.981 Nvme0n1 : 10.00 22991.90 89.81 0.00 0.00 0.00 0.00 0.00 00:30:21.981 [2024-11-19T12:22:25.358Z] =================================================================================================================== 00:30:21.981 [2024-11-19T12:22:25.358Z] Total : 22991.90 89.81 0.00 0.00 0.00 0.00 0.00 00:30:21.981 00:30:21.981 00:30:21.981 Latency(us) 00:30:21.981 [2024-11-19T12:22:25.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:21.981 Nvme0n1 : 10.00 22998.36 89.84 0.00 0.00 5562.68 3219.81 27468.13 00:30:21.981 
[2024-11-19T12:22:25.358Z] =================================================================================================================== 00:30:21.981 [2024-11-19T12:22:25.358Z] Total : 22998.36 89.84 0.00 0.00 5562.68 3219.81 27468.13 00:30:21.981 { 00:30:21.981 "results": [ 00:30:21.981 { 00:30:21.981 "job": "Nvme0n1", 00:30:21.981 "core_mask": "0x2", 00:30:21.981 "workload": "randwrite", 00:30:21.981 "status": "finished", 00:30:21.981 "queue_depth": 128, 00:30:21.981 "io_size": 4096, 00:30:21.981 "runtime": 10.002755, 00:30:21.981 "iops": 22998.363950731575, 00:30:21.981 "mibps": 89.83735918254521, 00:30:21.981 "io_failed": 0, 00:30:21.981 "io_timeout": 0, 00:30:21.981 "avg_latency_us": 5562.680503526595, 00:30:21.981 "min_latency_us": 3219.8121739130434, 00:30:21.981 "max_latency_us": 27468.132173913044 00:30:21.981 } 00:30:21.981 ], 00:30:21.981 "core_count": 1 00:30:21.981 } 00:30:21.981 13:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3041841 00:30:21.981 13:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3041841 ']' 00:30:21.981 13:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3041841 00:30:21.981 13:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:21.981 13:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.981 13:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3041841 00:30:21.981 13:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:21.981 13:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:21.981 13:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3041841' 00:30:21.981 killing process with pid 3041841 00:30:21.981 13:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3041841 00:30:21.981 Received shutdown signal, test time was about 10.000000 seconds 00:30:21.981 00:30:21.981 Latency(us) 00:30:21.981 [2024-11-19T12:22:25.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.981 [2024-11-19T12:22:25.358Z] =================================================================================================================== 00:30:21.981 [2024-11-19T12:22:25.358Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:21.981 13:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3041841 00:30:22.241 13:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:22.500 13:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:30:22.760 13:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c 00:30:22.760 13:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:22.760 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:22.760 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:22.760 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3038947 00:30:22.760 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3038947 00:30:23.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3038947 Killed "${NVMF_APP[@]}" "$@" 00:30:23.020 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:23.020 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:23.020 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:23.020 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.020 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:23.020 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3043836 00:30:23.020 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3043836 00:30:23.020 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:23.021 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3043836 ']' 00:30:23.021 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.021 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:23.021 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:23.021 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:23.021 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:23.021 [2024-11-19 13:22:26.198489] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:23.021 [2024-11-19 13:22:26.199393] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:30:23.021 [2024-11-19 13:22:26.199431] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.021 [2024-11-19 13:22:26.280830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.021 [2024-11-19 13:22:26.319848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:23.021 [2024-11-19 13:22:26.319883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:23.021 [2024-11-19 13:22:26.319891] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:23.021 [2024-11-19 13:22:26.319897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:23.021 [2024-11-19 13:22:26.319902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:23.021 [2024-11-19 13:22:26.320434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.021 [2024-11-19 13:22:26.387619] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:23.021 [2024-11-19 13:22:26.387829] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
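The interesting part of the dirty variant is what happens next: the target that owned the lvstore was killed with -9 after bdev_lvol_grow_lvstore ran (so the grown blobstore was never cleanly shut down), and a fresh nvmf_tgt has just been started in interrupt mode. Re-creating the AIO bdev below makes the blobstore replay its metadata (the bs_recover notices), after which the grown geometry must still be visible. A hedged sketch of that verification, reusing the placeholders from the sketch above:

    # Re-attach the backing file; blobstore recovery runs automatically.
    "$RPC_PY" bdev_aio_create "$AIO_FILE" aio_bdev 4096
    # The grow must have survived the crash: 99 total clusters, 61 free
    # (99 minus the 38 clusters the 150M lvol occupies).
    free=$("$RPC_PY" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    total=$("$RPC_PY" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 )) || echo "lvstore did not recover" >&2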
00:30:23.281 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:23.281 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:23.281 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:23.281 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:23.281 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:23.281 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.281 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:23.281 [2024-11-19 13:22:26.629899] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:23.281 [2024-11-19 13:22:26.630108] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:23.281 [2024-11-19 13:22:26.630192] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:23.540 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:23.540 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c54f8619-d5c6-456e-9d90-915fe0523553 00:30:23.540 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c54f8619-d5c6-456e-9d90-915fe0523553 00:30:23.540 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:23.540 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:23.540 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:23.540 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:23.540 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:23.540 13:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c54f8619-d5c6-456e-9d90-915fe0523553 -t 2000 00:30:23.799 [ 00:30:23.799 { 00:30:23.800 "name": "c54f8619-d5c6-456e-9d90-915fe0523553", 00:30:23.800 "aliases": [ 00:30:23.800 "lvs/lvol" 00:30:23.800 ], 00:30:23.800 "product_name": "Logical Volume", 00:30:23.800 "block_size": 4096, 00:30:23.800 "num_blocks": 38912, 00:30:23.800 "uuid": "c54f8619-d5c6-456e-9d90-915fe0523553", 00:30:23.800 "assigned_rate_limits": { 00:30:23.800 "rw_ios_per_sec": 0, 00:30:23.800 "rw_mbytes_per_sec": 0, 00:30:23.800 
"r_mbytes_per_sec": 0, 00:30:23.800 "w_mbytes_per_sec": 0 00:30:23.800 }, 00:30:23.800 "claimed": false, 00:30:23.800 "zoned": false, 00:30:23.800 "supported_io_types": { 00:30:23.800 "read": true, 00:30:23.800 "write": true, 00:30:23.800 "unmap": true, 00:30:23.800 "flush": false, 00:30:23.800 "reset": true, 00:30:23.800 "nvme_admin": false, 00:30:23.800 "nvme_io": false, 00:30:23.800 "nvme_io_md": false, 00:30:23.800 "write_zeroes": true, 00:30:23.800 "zcopy": false, 00:30:23.800 "get_zone_info": false, 00:30:23.800 "zone_management": false, 00:30:23.800 "zone_append": false, 00:30:23.800 "compare": false, 00:30:23.800 "compare_and_write": false, 00:30:23.800 "abort": false, 00:30:23.800 "seek_hole": true, 00:30:23.800 "seek_data": true, 00:30:23.800 "copy": false, 00:30:23.800 "nvme_iov_md": false 00:30:23.800 }, 00:30:23.800 "driver_specific": { 00:30:23.800 "lvol": { 00:30:23.800 "lvol_store_uuid": "cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c", 00:30:23.800 "base_bdev": "aio_bdev", 00:30:23.800 "thin_provision": false, 00:30:23.800 "num_allocated_clusters": 38, 00:30:23.800 "snapshot": false, 00:30:23.800 "clone": false, 00:30:23.800 "esnap_clone": false 00:30:23.800 } 00:30:23.800 } 00:30:23.800 } 00:30:23.800 ] 00:30:23.800 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:23.800 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:23.800 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c 00:30:24.059 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:24.059 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c 00:30:24.059 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:24.317 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:24.317 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:24.317 [2024-11-19 13:22:27.620901] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:24.317 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c 00:30:24.317 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:24.317 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c 00:30:24.317 13:22:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:24.317 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:24.317 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:24.317 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:24.317 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:24.317 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:24.317 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:24.317 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:24.317 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c 00:30:24.575 request: 00:30:24.575 { 00:30:24.575 "uuid": "cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c", 00:30:24.575 "method": "bdev_lvol_get_lvstores", 00:30:24.575 "req_id": 1 00:30:24.575 } 00:30:24.575 Got JSON-RPC error response 00:30:24.575 response: 00:30:24.575 { 00:30:24.575 "code": -19, 00:30:24.575 "message": "No such device" 00:30:24.575 } 00:30:24.575 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:24.575 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:24.575 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:24.575 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:24.575 13:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:24.835 aio_bdev 00:30:24.835 13:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c54f8619-d5c6-456e-9d90-915fe0523553 00:30:24.835 13:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c54f8619-d5c6-456e-9d90-915fe0523553 00:30:24.835 13:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:24.835 13:22:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:24.835 13:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:24.835 13:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:24.835 13:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:25.095 13:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c54f8619-d5c6-456e-9d90-915fe0523553 -t 2000 00:30:25.095 [ 00:30:25.095 { 00:30:25.095 "name": "c54f8619-d5c6-456e-9d90-915fe0523553", 00:30:25.095 "aliases": [ 00:30:25.095 "lvs/lvol" 00:30:25.095 ], 00:30:25.095 "product_name": "Logical Volume", 00:30:25.095 "block_size": 4096, 00:30:25.095 "num_blocks": 38912, 00:30:25.095 "uuid": "c54f8619-d5c6-456e-9d90-915fe0523553", 00:30:25.095 "assigned_rate_limits": { 00:30:25.095 "rw_ios_per_sec": 0, 00:30:25.095 "rw_mbytes_per_sec": 0, 00:30:25.095 "r_mbytes_per_sec": 0, 00:30:25.095 "w_mbytes_per_sec": 0 00:30:25.095 }, 00:30:25.095 "claimed": false, 00:30:25.095 "zoned": false, 00:30:25.095 "supported_io_types": { 00:30:25.095 "read": true, 00:30:25.095 "write": true, 00:30:25.095 "unmap": true, 00:30:25.095 "flush": false, 00:30:25.095 "reset": true, 00:30:25.095 "nvme_admin": false, 00:30:25.095 "nvme_io": false, 00:30:25.095 "nvme_io_md": false, 00:30:25.095 "write_zeroes": true, 00:30:25.095 "zcopy": false, 00:30:25.095 "get_zone_info": false, 00:30:25.095 "zone_management": false, 00:30:25.095 "zone_append": false, 00:30:25.095 "compare": false, 00:30:25.095 "compare_and_write": false, 00:30:25.095 "abort": false, 00:30:25.095 "seek_hole": true, 00:30:25.095 "seek_data": true, 00:30:25.095 "copy": false, 00:30:25.095 "nvme_iov_md": false 00:30:25.095 }, 00:30:25.095 "driver_specific": { 00:30:25.095 "lvol": { 00:30:25.095 "lvol_store_uuid": "cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c", 00:30:25.095 "base_bdev": "aio_bdev", 00:30:25.095 "thin_provision": false, 00:30:25.095 "num_allocated_clusters": 38, 00:30:25.095 "snapshot": false, 00:30:25.095 "clone": false, 00:30:25.095 "esnap_clone": false 00:30:25.095 } 00:30:25.095 } 00:30:25.095 } 00:30:25.095 ] 00:30:25.095 13:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:25.095 13:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c 00:30:25.095 13:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:25.354 13:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:25.354 13:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c 00:30:25.354 13:22:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:25.614 13:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:25.614 13:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c54f8619-d5c6-456e-9d90-915fe0523553 00:30:25.874 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cb8eb9bf-3e5c-467b-a2b7-4c1f9864e81c 00:30:26.132 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:26.132 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:26.132 00:30:26.132 real 0m17.092s 00:30:26.132 user 0m34.493s 00:30:26.132 sys 0m3.787s 00:30:26.132 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.132 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:26.132 ************************************ 00:30:26.132 END TEST lvs_grow_dirty 00:30:26.132 ************************************ 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:26.391 nvmf_trace.0 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
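For completeness, the nvmftestfini path running around this point (the sync above, the module unload and network cleanup below) reduces to roughly the following on this CI rig; the interface and pid names are specific to the trace, so treat this as a sketch rather than a portable teardown:

    # Tear down the initiator side and the target's network plumbing.
    sync
    modprobe -v -r nvme-tcp          # also drags out nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                  # assumption: pid recorded at target start
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1         # CI-host interface name from the trace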
00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:26.391 rmmod nvme_tcp 00:30:26.391 rmmod nvme_fabrics 00:30:26.391 rmmod nvme_keyring 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3043836 ']' 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3043836 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3043836 ']' 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3043836 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3043836 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3043836' 00:30:26.391 killing process with pid 3043836 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3043836 00:30:26.391 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3043836 00:30:26.650 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:26.650 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:26.650 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:26.650 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:26.650 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:26.650 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:26.650 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:26.650 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:26.650 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:26.650 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.650 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.650 13:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.557 13:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:28.557 00:30:28.557 real 0m41.961s 00:30:28.557 user 0m52.159s 00:30:28.557 sys 0m10.178s 00:30:28.557 13:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:28.557 13:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:28.557 ************************************ 00:30:28.557 END TEST nvmf_lvs_grow 00:30:28.557 ************************************ 00:30:28.817 13:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:28.817 13:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:28.817 13:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:28.817 13:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:28.817 ************************************ 00:30:28.817 START TEST nvmf_bdev_io_wait 00:30:28.817 ************************************ 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:28.817 * Looking for test storage... 
00:30:28.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:28.817 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:28.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.818 --rc genhtml_branch_coverage=1 00:30:28.818 --rc genhtml_function_coverage=1 00:30:28.818 --rc genhtml_legend=1 00:30:28.818 --rc geninfo_all_blocks=1 00:30:28.818 --rc geninfo_unexecuted_blocks=1 00:30:28.818 00:30:28.818 ' 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:28.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.818 --rc genhtml_branch_coverage=1 00:30:28.818 --rc genhtml_function_coverage=1 00:30:28.818 --rc genhtml_legend=1 00:30:28.818 --rc geninfo_all_blocks=1 00:30:28.818 --rc geninfo_unexecuted_blocks=1 00:30:28.818 00:30:28.818 ' 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:28.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.818 --rc genhtml_branch_coverage=1 00:30:28.818 --rc genhtml_function_coverage=1 00:30:28.818 --rc genhtml_legend=1 00:30:28.818 --rc geninfo_all_blocks=1 00:30:28.818 --rc geninfo_unexecuted_blocks=1 00:30:28.818 00:30:28.818 ' 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:28.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.818 --rc genhtml_branch_coverage=1 00:30:28.818 --rc genhtml_function_coverage=1 00:30:28.818 --rc genhtml_legend=1 00:30:28.818 --rc geninfo_all_blocks=1 00:30:28.818 --rc 
geninfo_unexecuted_blocks=1 00:30:28.818 00:30:28.818 ' 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.818 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:29.078 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:29.079 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:29.079 13:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.652 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
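gather_supported_nvmf_pci_devs buckets NICs by vendor:device ID (Intel E810/X722, Mellanox ConnectX families) before choosing test interfaces. A rough standalone equivalent of that scan, assuming lspci is available and that its -Dnmm output puts slot, class, vendor, and device in the first four fields; the real harness reads a prebuilt pci_bus_cache map instead:

    intel=0x8086 mellanox=0x15b3
    e810=() mlx=()
    while read -r addr vendor device; do
        case "$vendor:$device" in
            "$intel:0x1592"|"$intel:0x159b") e810+=("$addr") ;;       # E810 IDs from the trace
            "$mellanox:0x1017"|"$mellanox:0x1019") mlx+=("$addr") ;;  # two of the ConnectX IDs
        esac
    done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, "0x"$3, "0x"$4}')
    echo "found ${#e810[@]} e810 and ${#mlx[@]} mlx devices"

On this node the scan reports two E810 functions (0000:86:00.0 and 0000:86:00.1, device 0x159b), which is why the count check (( 2 == 0 )) above falls through to the per-device loop.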
00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:35.653 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:35.653 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:35.653 Found net devices under 0000:86:00.0: cvl_0_0 00:30:35.653 
13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:35.653 Found net devices under 0000:86:00.1: cvl_0_1 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.653 13:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:35.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:30:35.653 00:30:35.653 --- 10.0.0.2 ping statistics --- 00:30:35.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.653 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:35.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:30:35.653 00:30:35.653 --- 10.0.0.1 ping statistics --- 00:30:35.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.653 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:35.653 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3047947 00:30:35.654 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3047947 00:30:35.654 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:35.654 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3047947 ']' 00:30:35.654 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.654 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:35.654 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
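waitforlisten blocks until the freshly started target answers on its RPC socket, giving up after the max_retries=100 visible in the trace. A hedged sketch of that helper's shape (the real one in autotest_common.sh carries more bookkeeping); rpc_get_methods is a genuine SPDK RPC that any live target answers:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1      # target died before listening
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                        # gave up waiting
    }

Polling the RPC socket rather than sleeping a fixed interval is what lets the "Waiting for process..." message above resolve as soon as the target is actually ready.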
00:30:35.654 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:35.654 13:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:35.654 [2024-11-19 13:22:38.236405] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:35.654 [2024-11-19 13:22:38.237345] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:30:35.654 [2024-11-19 13:22:38.237377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.654 [2024-11-19 13:22:38.316853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:35.654 [2024-11-19 13:22:38.361491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.654 [2024-11-19 13:22:38.361528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.654 [2024-11-19 13:22:38.361535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.654 [2024-11-19 13:22:38.361541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.654 [2024-11-19 13:22:38.361546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.654 [2024-11-19 13:22:38.363056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.654 [2024-11-19 13:22:38.363164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.654 [2024-11-19 13:22:38.363268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.654 [2024-11-19 13:22:38.363270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:35.654 [2024-11-19 13:22:38.363603] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
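The -m 0xF mask handed to nvmf_tgt selects cores 0-3, one reactor per set bit, which is why the log reports exactly four reactors. A quick loop to expand such a mask:

    mask=0xF
    for ((core = 0; core < 64; core++)); do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done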
00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:35.914 [2024-11-19 13:22:39.176211] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:35.914 [2024-11-19 13:22:39.176676] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:35.914 [2024-11-19 13:22:39.176899] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:35.914 [2024-11-19 13:22:39.177058] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
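The point of bdev_set_options -p 5 -c 1 above appears to be starving the bdev layer: a five-entry bdev_io pool with a one-entry per-thread cache makes allocation failures likely under the 128-deep workloads launched later, which is exactly the ENOMEM/io-wait retry path this test exists to exercise. The same two RPCs issued directly (rpc_cmd is a thin wrapper around rpc.py):

    scripts/rpc.py bdev_set_options -p 5 -c 1    # shrink bdev_io pool and per-thread cache
    scripts/rpc.py framework_start_init          # options must be set before subsystem init

Note the ordering: bdev_set_options only takes effect when issued during --wait-for-rpc, before framework_start_init brings the subsystems up.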
00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:35.914 [2024-11-19 13:22:39.187985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:35.914 Malloc0 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:35.914 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:35.915 [2024-11-19 13:22:39.260299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3048136 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3048139 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:35.915 { 00:30:35.915 "params": { 00:30:35.915 "name": "Nvme$subsystem", 00:30:35.915 "trtype": "$TEST_TRANSPORT", 00:30:35.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.915 "adrfam": "ipv4", 00:30:35.915 "trsvcid": "$NVMF_PORT", 00:30:35.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.915 "hdgst": ${hdgst:-false}, 00:30:35.915 "ddgst": ${ddgst:-false} 00:30:35.915 }, 00:30:35.915 "method": "bdev_nvme_attach_controller" 00:30:35.915 } 00:30:35.915 EOF 00:30:35.915 )") 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3048141 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:35.915 { 00:30:35.915 "params": { 00:30:35.915 "name": "Nvme$subsystem", 00:30:35.915 "trtype": "$TEST_TRANSPORT", 00:30:35.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.915 "adrfam": "ipv4", 00:30:35.915 "trsvcid": "$NVMF_PORT", 00:30:35.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.915 "hdgst": ${hdgst:-false}, 00:30:35.915 "ddgst": ${ddgst:-false} 00:30:35.915 }, 00:30:35.915 "method": "bdev_nvme_attach_controller" 00:30:35.915 } 00:30:35.915 EOF 00:30:35.915 )") 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=3048145 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:35.915 { 00:30:35.915 "params": { 00:30:35.915 "name": "Nvme$subsystem", 00:30:35.915 "trtype": "$TEST_TRANSPORT", 00:30:35.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.915 "adrfam": "ipv4", 00:30:35.915 "trsvcid": "$NVMF_PORT", 00:30:35.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.915 "hdgst": ${hdgst:-false}, 00:30:35.915 "ddgst": ${ddgst:-false} 00:30:35.915 }, 00:30:35.915 "method": "bdev_nvme_attach_controller" 00:30:35.915 } 00:30:35.915 EOF 00:30:35.915 )") 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:35.915 { 00:30:35.915 "params": { 00:30:35.915 "name": "Nvme$subsystem", 00:30:35.915 "trtype": "$TEST_TRANSPORT", 00:30:35.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.915 "adrfam": "ipv4", 00:30:35.915 "trsvcid": "$NVMF_PORT", 00:30:35.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.915 "hdgst": ${hdgst:-false}, 00:30:35.915 "ddgst": ${ddgst:-false} 00:30:35.915 }, 00:30:35.915 "method": "bdev_nvme_attach_controller" 00:30:35.915 } 00:30:35.915 EOF 00:30:35.915 )") 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3048136 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
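The --json /dev/fd/63 seen in every bdevperf command line above is bash process substitution: gen_nvmf_target_json prints the attach-controller config and <(...) exposes it to the child as an anonymous file descriptor. A minimal illustration of the mechanism only; gen_config stands in for the real generator and the JSON body is schematic:

    gen_config() {
        printf '%s\n' '{ "subsystems": [ ... ] }'    # placeholder for the generated config
    }
    bdevperf --json <(gen_config) -q 128 -o 4096 -w write -t 1
    # <(gen_config) expands to a path like /dev/fd/63, matching the trace above.

This avoids writing temporary config files on disk while still giving each of the four bdevperf instances its own independent configuration stream.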
00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:35.915 "params": { 00:30:35.915 "name": "Nvme1", 00:30:35.915 "trtype": "tcp", 00:30:35.915 "traddr": "10.0.0.2", 00:30:35.915 "adrfam": "ipv4", 00:30:35.915 "trsvcid": "4420", 00:30:35.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:35.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:35.915 "hdgst": false, 00:30:35.915 "ddgst": false 00:30:35.915 }, 00:30:35.915 "method": "bdev_nvme_attach_controller" 00:30:35.915 }' 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:35.915 "params": { 00:30:35.915 "name": "Nvme1", 00:30:35.915 "trtype": "tcp", 00:30:35.915 "traddr": "10.0.0.2", 00:30:35.915 "adrfam": "ipv4", 00:30:35.915 "trsvcid": "4420", 00:30:35.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:35.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:35.915 "hdgst": false, 00:30:35.915 "ddgst": false 00:30:35.915 }, 00:30:35.915 "method": "bdev_nvme_attach_controller" 00:30:35.915 }' 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:35.915 "params": { 00:30:35.915 "name": "Nvme1", 00:30:35.915 "trtype": "tcp", 00:30:35.915 "traddr": "10.0.0.2", 00:30:35.915 "adrfam": "ipv4", 00:30:35.915 "trsvcid": "4420", 00:30:35.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:35.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:35.915 "hdgst": false, 00:30:35.915 "ddgst": false 00:30:35.915 }, 00:30:35.915 "method": "bdev_nvme_attach_controller" 00:30:35.915 }' 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:35.915 13:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:35.915 "params": { 00:30:35.915 "name": "Nvme1", 00:30:35.915 "trtype": "tcp", 00:30:35.915 "traddr": "10.0.0.2", 00:30:35.915 "adrfam": "ipv4", 00:30:35.915 "trsvcid": "4420", 00:30:35.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:35.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:35.916 "hdgst": false, 00:30:35.916 "ddgst": false 00:30:35.916 }, 00:30:35.916 "method": "bdev_nvme_attach_controller" 00:30:35.916 }' 00:30:36.174 [2024-11-19 13:22:39.309803] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:30:36.174 [2024-11-19 13:22:39.309855] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:36.174 [2024-11-19 13:22:39.312868] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:30:36.174 [2024-11-19 13:22:39.312909] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:36.174 [2024-11-19 13:22:39.314314] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:30:36.174 [2024-11-19 13:22:39.314363] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:36.174 [2024-11-19 13:22:39.314543] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:30:36.174 [2024-11-19 13:22:39.314583] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:36.174 [2024-11-19 13:22:39.491109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.174 [2024-11-19 13:22:39.534125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:36.433 [2024-11-19 13:22:39.589538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.433 [2024-11-19 13:22:39.632574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:36.433 [2024-11-19 13:22:39.689813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.433 [2024-11-19 13:22:39.742412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.433 [2024-11-19 13:22:39.745622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:36.433 [2024-11-19 13:22:39.785219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:36.692 Running I/O for 1 seconds... 00:30:36.692 Running I/O for 1 seconds... 00:30:36.692 Running I/O for 1 seconds... 00:30:36.692 Running I/O for 1 seconds... 
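The four "Running I/O for 1 seconds..." lines correspond to four concurrent bdevperf clients, one per workload (write, read, flush, unmap) on core masks 0x10, 0x20, 0x40, and 0x80, all hammering the same controller at queue depth 128. The script backgrounds each and joins on the recorded PIDs, roughly as follows, with BDEVPERF standing in for the full command line shown above:

    "${BDEVPERF[@]}" -m 0x10 -w write -t 1 & WRITE_PID=$!
    "${BDEVPERF[@]}" -m 0x20 -w read  -t 1 & READ_PID=$!
    "${BDEVPERF[@]}" -m 0x40 -w flush -t 1 & FLUSH_PID=$!
    "${BDEVPERF[@]}" -m 0x80 -w unmap -t 1 & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

Running all four at once is deliberate: contention on the deliberately tiny bdev_io pool is what drives the io-wait path under test.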
00:30:37.629 11821.00 IOPS, 46.18 MiB/s
00:30:37.629 Latency(us)
00:30:37.629 [2024-11-19T12:22:41.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:37.629 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:30:37.629 Nvme1n1 : 1.01 11885.85 46.43 0.00 0.00 10734.54 1659.77 12879.25
00:30:37.629 [2024-11-19T12:22:41.006Z] ===================================================================================================================
00:30:37.629 [2024-11-19T12:22:41.006Z] Total : 11885.85 46.43 0.00 0.00 10734.54 1659.77 12879.25
00:30:37.629 11041.00 IOPS, 43.13 MiB/s
00:30:37.629 Latency(us)
00:30:37.629 [2024-11-19T12:22:41.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:37.629 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:30:37.629 Nvme1n1 : 1.01 11119.66 43.44 0.00 0.00 11478.38 1980.33 14417.92
00:30:37.629 [2024-11-19T12:22:41.006Z] ===================================================================================================================
00:30:37.629 [2024-11-19T12:22:41.006Z] Total : 11119.66 43.44 0.00 0.00 11478.38 1980.33 14417.92
00:30:37.629 10530.00 IOPS, 41.13 MiB/s
00:30:37.629 Latency(us)
00:30:37.629 [2024-11-19T12:22:41.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:37.629 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:30:37.629 Nvme1n1 : 1.01 10607.91 41.44 0.00 0.00 12034.42 3818.18 17552.25
00:30:37.629 [2024-11-19T12:22:41.006Z] ===================================================================================================================
00:30:37.629 [2024-11-19T12:22:41.006Z] Total : 10607.91 41.44 0.00 0.00 12034.42 3818.18 17552.25
00:30:37.629 245360.00 IOPS, 958.44 MiB/s
00:30:37.629 Latency(us)
00:30:37.629 [2024-11-19T12:22:41.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:37.629 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:30:37.629 Nvme1n1 : 1.00 244978.97 956.95 0.00 0.00 519.51 233.29 1538.67
00:30:37.629 [2024-11-19T12:22:41.006Z] ===================================================================================================================
00:30:37.629 [2024-11-19T12:22:41.006Z] Total : 244978.97 956.95 0.00 0.00 519.51 233.29 1538.67
00:30:37.629 13:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3048139 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3048141 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3048145 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait
-- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.889 rmmod nvme_tcp 00:30:37.889 rmmod nvme_fabrics 00:30:37.889 rmmod nvme_keyring 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3047947 ']' 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3047947 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3047947 ']' 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3047947 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3047947 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3047947' 00:30:37.889 killing process with pid 3047947 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3047947 00:30:37.889 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3047947 00:30:38.149 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:38.149 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:38.149 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:38.149 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:38.149 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
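Teardown undoes only the firewall rule the test installed: the rule was tagged with an SPDK_NVMF comment at insertion time (see the iptables -I ... -m comment line earlier in the log), so cleanup can round-trip the whole ruleset and drop just the tagged entries:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only SPDK-tagged rules

Filtering on the comment tag leaves any pre-existing site rules untouched, which matters on shared CI nodes like this one.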
00:30:38.149 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:38.149 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:38.149 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:38.149 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:38.149 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.149 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.149 13:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.056 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:40.056 00:30:40.056 real 0m11.407s 00:30:40.056 user 0m14.554s 00:30:40.056 sys 0m6.597s 00:30:40.056 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:40.056 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:40.056 ************************************ 00:30:40.056 END TEST nvmf_bdev_io_wait 00:30:40.056 ************************************ 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:40.316 ************************************ 00:30:40.316 START TEST nvmf_queue_depth 00:30:40.316 ************************************ 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:40.316 * Looking for test storage... 
00:30:40.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:40.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.316 --rc genhtml_branch_coverage=1 00:30:40.316 --rc genhtml_function_coverage=1 00:30:40.316 --rc genhtml_legend=1 00:30:40.316 --rc geninfo_all_blocks=1 00:30:40.316 --rc geninfo_unexecuted_blocks=1 00:30:40.316 00:30:40.316 ' 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:40.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.316 --rc genhtml_branch_coverage=1 00:30:40.316 --rc genhtml_function_coverage=1 00:30:40.316 --rc genhtml_legend=1 00:30:40.316 --rc geninfo_all_blocks=1 00:30:40.316 --rc geninfo_unexecuted_blocks=1 00:30:40.316 00:30:40.316 ' 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:40.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.316 --rc genhtml_branch_coverage=1 00:30:40.316 --rc genhtml_function_coverage=1 00:30:40.316 --rc genhtml_legend=1 00:30:40.316 --rc geninfo_all_blocks=1 00:30:40.316 --rc geninfo_unexecuted_blocks=1 00:30:40.316 00:30:40.316 ' 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:40.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.316 --rc genhtml_branch_coverage=1 00:30:40.316 --rc genhtml_function_coverage=1 00:30:40.316 --rc genhtml_legend=1 00:30:40.316 --rc geninfo_all_blocks=1 00:30:40.316 --rc 
geninfo_unexecuted_blocks=1 00:30:40.316 00:30:40.316 ' 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.316 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.317 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.576 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:40.576 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:40.576 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:40.576 13:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:46.016 13:22:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:46.016 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:46.016 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:30:46.016 Found net devices under 0000:86:00.0: cvl_0_0 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.016 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:46.016 Found net devices under 0000:86:00.1: cvl_0_1 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:46.017 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:46.275 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:46.275 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:46.275 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:46.275 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:46.275 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:46.275 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:46.275 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:46.275 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:46.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:46.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:30:46.275 00:30:46.275 --- 10.0.0.2 ping statistics --- 00:30:46.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.275 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:30:46.275 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:46.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:46.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:30:46.275 00:30:46.275 --- 10.0.0.1 ping statistics --- 00:30:46.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.275 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:30:46.275 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3051976 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3051976 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3051976 ']' 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
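The plumbing verified by those two pings is a single-host loopback topology; condensed from the xtrace above (interface names cvl_0_0/cvl_0_1 are the two E810 ports found on this machine and will differ elsewhere, and the iptables comment is abbreviated):

    ip netns add cvl_0_0_ns_spdk                                        # namespace holding the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, namespace side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'                            # tagged so teardown can grep it back out
    ping -c 1 10.0.0.2                                                  # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> host

Both pings answering (0.412 ms and 0.194 ms round trips above) is the precondition for launching the target.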
00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:46.276 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:46.276 [2024-11-19 13:22:49.635887] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:46.276 [2024-11-19 13:22:49.636827] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:30:46.276 [2024-11-19 13:22:49.636860] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:46.535 [2024-11-19 13:22:49.716334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.535 [2024-11-19 13:22:49.757197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:46.535 [2024-11-19 13:22:49.757234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:46.535 [2024-11-19 13:22:49.757242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:46.535 [2024-11-19 13:22:49.757248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:46.535 [2024-11-19 13:22:49.757253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:46.535 [2024-11-19 13:22:49.757790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.535 [2024-11-19 13:22:49.823330] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:46.535 [2024-11-19 13:22:49.823540] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
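The target is then started inside that namespace; stripped of xtrace noise, the launch used in this run is:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2    # shm id 0, all tracepoint groups, core mask 0x2

Core mask 0x2 pins the app to core 1, matching the 'Reactor started on core 1' notice, and --interrupt-mode is what produces the thread.c notices above about app_thread and nvmf_tgt_poll_group_000 being set to interrupt rather than poll mode.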
00:30:46.535 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:46.535 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:46.535 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:46.535 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:46.535 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:46.535 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:46.535 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:46.535 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.535 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:46.535 [2024-11-19 13:22:49.894455] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.535 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.535 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:46.535 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.535 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:46.794 Malloc0 00:30:46.794 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.794 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:46.794 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.794 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:46.794 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.794 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:46.794 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.794 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:46.794 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.794 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:46.795 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:46.795 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:46.795 [2024-11-19 13:22:49.970575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.795 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.795 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3052004 00:30:46.795 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:46.795 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:46.795 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3052004 /var/tmp/bdevperf.sock 00:30:46.795 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3052004 ']' 00:30:46.795 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:46.795 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:46.795 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:46.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:46.795 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:46.795 13:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:46.795 [2024-11-19 13:22:50.021865] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:30:46.795 [2024-11-19 13:22:50.021908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052004 ] 00:30:46.795 [2024-11-19 13:22:50.101499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.795 [2024-11-19 13:22:50.144589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.054 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:47.054 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:47.054 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:47.054 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.054 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:47.054 NVMe0n1 00:30:47.054 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.054 13:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:47.313 Running I/O for 10 seconds... 00:30:49.189 11264.00 IOPS, 44.00 MiB/s [2024-11-19T12:22:53.504Z] 11773.00 IOPS, 45.99 MiB/s [2024-11-19T12:22:54.883Z] 11944.67 IOPS, 46.66 MiB/s [2024-11-19T12:22:55.452Z] 12022.00 IOPS, 46.96 MiB/s [2024-11-19T12:22:56.841Z] 12010.20 IOPS, 46.91 MiB/s [2024-11-19T12:22:57.778Z] 12054.83 IOPS, 47.09 MiB/s [2024-11-19T12:22:58.715Z] 12099.71 IOPS, 47.26 MiB/s [2024-11-19T12:22:59.650Z] 12126.75 IOPS, 47.37 MiB/s [2024-11-19T12:23:00.588Z] 12142.78 IOPS, 47.43 MiB/s [2024-11-19T12:23:00.588Z] 12127.40 IOPS, 47.37 MiB/s 00:30:57.211 Latency(us) 00:30:57.211 [2024-11-19T12:23:00.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:57.211 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:57.212 Verification LBA range: start 0x0 length 0x4000 00:30:57.212 NVMe0n1 : 10.05 12151.44 47.47 0.00 0.00 83953.15 10884.67 55392.17 00:30:57.212 [2024-11-19T12:23:00.589Z] =================================================================================================================== 00:30:57.212 [2024-11-19T12:23:00.589Z] Total : 12151.44 47.47 0.00 0.00 83953.15 10884.67 55392.17 00:30:57.212 { 00:30:57.212 "results": [ 00:30:57.212 { 00:30:57.212 "job": "NVMe0n1", 00:30:57.212 "core_mask": "0x1", 00:30:57.212 "workload": "verify", 00:30:57.212 "status": "finished", 00:30:57.212 "verify_range": { 00:30:57.212 "start": 0, 00:30:57.212 "length": 16384 00:30:57.212 }, 00:30:57.212 "queue_depth": 1024, 00:30:57.212 "io_size": 4096, 00:30:57.212 "runtime": 10.052798, 00:30:57.212 "iops": 12151.442812239935, 00:30:57.212 "mibps": 47.466573485312246, 00:30:57.212 "io_failed": 0, 00:30:57.212 "io_timeout": 0, 00:30:57.212 "avg_latency_us": 83953.14622175209, 00:30:57.212 "min_latency_us": 10884.674782608696, 00:30:57.212 "max_latency_us": 55392.16695652174 00:30:57.212 } 
00:30:57.212 ], 00:30:57.212 "core_count": 1 00:30:57.212 } 00:30:57.212 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3052004 00:30:57.212 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3052004 ']' 00:30:57.212 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3052004 00:30:57.212 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:57.212 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:57.212 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3052004 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3052004' 00:30:57.471 killing process with pid 3052004 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3052004 00:30:57.471 Received shutdown signal, test time was about 10.000000 seconds 00:30:57.471 00:30:57.471 Latency(us) 00:30:57.471 [2024-11-19T12:23:00.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:57.471 [2024-11-19T12:23:00.848Z] =================================================================================================================== 00:30:57.471 [2024-11-19T12:23:00.848Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3052004 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:57.471 rmmod nvme_tcp 00:30:57.471 rmmod nvme_fabrics 00:30:57.471 rmmod nvme_keyring 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
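For reference, the whole queue-depth run above condenses to a short sequence; rpc_cmd is the harness wrapper around scripts/rpc.py (default socket /var/tmp/spdk.sock, or the one named with -s), and paths are abbreviated to the spdk repo root:

    # target side: TCP transport, a 64 MiB malloc bdev with 512 B blocks, one subsystem, one listener
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf started with -z so it waits for RPC configuration,
    # then driven at queue depth 1024 with 4 KiB verify I/O for 10 seconds
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The result (12151.44 IOPS, 47.47 MiB/s, 83953 us average latency) is self-consistent: at a saturated queue depth of 1024, Little's law gives 1024 / 12151 I/O per second ≈ 0.0843 s ≈ 84 ms per I/O, which is what the latency column reports.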
00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3051976 ']' 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3051976 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3051976 ']' 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3051976 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:57.471 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3051976 00:30:57.730 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:57.730 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:57.730 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3051976' 00:30:57.730 killing process with pid 3051976 00:30:57.730 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3051976 00:30:57.730 13:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3051976 00:30:57.730 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:57.730 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:57.730 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:57.730 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:57.730 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:57.730 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:57.730 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:30:57.731 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:57.731 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:57.731 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.731 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.731 13:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:00.267 00:31:00.267 real 0m19.640s 00:31:00.267 user 0m22.571s 00:31:00.267 sys 0m6.350s 00:31:00.267 13:23:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:00.267 ************************************ 00:31:00.267 END TEST nvmf_queue_depth 00:31:00.267 ************************************ 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:00.267 ************************************ 00:31:00.267 START TEST nvmf_target_multipath 00:31:00.267 ************************************ 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:00.267 * Looking for test storage... 00:31:00.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:00.267 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:00.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.268 --rc genhtml_branch_coverage=1 00:31:00.268 --rc genhtml_function_coverage=1 00:31:00.268 --rc genhtml_legend=1 00:31:00.268 --rc geninfo_all_blocks=1 00:31:00.268 --rc geninfo_unexecuted_blocks=1 00:31:00.268 00:31:00.268 ' 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:00.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.268 --rc genhtml_branch_coverage=1 00:31:00.268 --rc genhtml_function_coverage=1 00:31:00.268 --rc genhtml_legend=1 00:31:00.268 --rc geninfo_all_blocks=1 00:31:00.268 --rc geninfo_unexecuted_blocks=1 00:31:00.268 00:31:00.268 ' 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:00.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.268 --rc genhtml_branch_coverage=1 00:31:00.268 --rc genhtml_function_coverage=1 00:31:00.268 --rc genhtml_legend=1 
00:31:00.268 --rc geninfo_all_blocks=1 00:31:00.268 --rc geninfo_unexecuted_blocks=1 00:31:00.268 00:31:00.268 ' 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:00.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.268 --rc genhtml_branch_coverage=1 00:31:00.268 --rc genhtml_function_coverage=1 00:31:00.268 --rc genhtml_legend=1 00:31:00.268 --rc geninfo_all_blocks=1 00:31:00.268 --rc geninfo_unexecuted_blocks=1 00:31:00.268 00:31:00.268 ' 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
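The trace above is scripts/common.sh gating the lcov coverage flags on the installed lcov version ("1.15" vs "2"): the version strings are split on '.', '-' and ':' and compared element by element. A minimal standalone sketch of that compare, with the helper collapsed into one function (the real script spreads it across cmp_versions and decimal):

    lt() {                        # succeed when version $1 sorts before $2
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                  # equal versions are not "less than"
    }
    lt 1.15 2 && echo "old lcov: enable the --rc compatibility options"

With the inputs from this run the loop compares 1 against 2 on the first component and returns success, which is why the --rc lcov_branch_coverage / genhtml options get exported just below.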
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
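The repeated PATH walls above come from /etc/opt/spdk-pkgdep/paths/export.sh prepending the golangci, go and protoc directories each time it is sourced, so every pass duplicates the previous one. A sketch of the effect, plus a hypothetical dedup pass (not part of the original script) that would collapse the repeats:

    PATH=/opt/golangci/1.54.2/bin:$PATH   # each source prepends again
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH
    # illustrative cleanup only: keep the first occurrence of every directory
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')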
NVMF_APP_SHM_ID 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:00.268 13:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
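Just above, build_nvmf_app_args assembles the target's command line: the shared-memory id and trace mask always go in, and --interrupt-mode is appended because this run passed it to the test. A reduced sketch of that pattern (the base command and flag variable here are illustrative, not the helper's real inputs):

    NVMF_APP=(nvmf_tgt)                     # assumed base command
    NVMF_APP_SHM_ID=0
    TEST_INTERRUPT_MODE=1                   # set by --interrupt-mode in this run
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
    if [ "$TEST_INTERRUPT_MODE" -eq 1 ]; then
        NVMF_APP+=(--interrupt-mode)
    fi
    printf '%s\n' "${NVMF_APP[@]}"          # inspect the assembled argv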
common/autotest_common.sh@10 -- # set +x 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:06.844 13:23:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:06.844 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:06.844 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:06.844 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:06.845 13:23:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:06.845 Found net devices under 0000:86:00.0: cvl_0_0 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:06.845 Found net devices under 0000:86:00.1: cvl_0_1 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:06.845 13:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:06.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:06.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:31:06.845 00:31:06.845 --- 10.0.0.2 ping statistics --- 00:31:06.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.845 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:06.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:06.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:31:06.845 00:31:06.845 --- 10.0.0.1 ping statistics --- 00:31:06.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.845 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:06.845 only one NIC for nvmf test 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:06.845 rmmod nvme_tcp 00:31:06.845 rmmod nvme_fabrics 00:31:06.845 rmmod nvme_keyring 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:06.845 13:23:09 
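nvmf_tcp_init, traced above, fakes a two-host topology on one box: the target-side port is moved into its own network namespace, both ends get 10.0.0.x addresses, an SPDK-tagged iptables rule opens the NVMe/TCP port, and the two pings verify each direction. The bare command sequence from this run (root required, disposable test machine assumed):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator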
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.845 13:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:08.223 13:23:11 
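multipath.sh then bails out at line 46 ("only one NIC for nvmf test"): the second-target-IP variable set at nvmf/common.sh@262 is empty, so there is no second path to exercise, and nvmftestfini unwinds the setup. A sketch of the teardown traced above; the namespace removal is folded into one assumed command, since _remove_spdk_ns itself is never expanded in this log:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rule
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1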
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:08.223 00:31:08.223 real 0m8.263s 00:31:08.223 user 0m1.859s 00:31:08.223 sys 0m4.421s 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:08.223 ************************************ 00:31:08.223 END TEST nvmf_target_multipath 00:31:08.223 ************************************ 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:08.223 ************************************ 00:31:08.223 START TEST nvmf_zcopy 00:31:08.223 ************************************ 00:31:08.223 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:08.483 * Looking for test storage... 
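The START TEST / END TEST banners and the real/user/sys line above come from the run_test wrapper in autotest_common.sh, which times each sub-script and brackets its output. The visible skeleton, reconstructed from what the log shows (the real helper also manages xtrace and exit-code bookkeeping):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                       # produces the real/user/sys summary
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test nvmf_zcopy ./zcopy.sh --transport=tcp --interrupt-mode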
00:31:08.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:08.483 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:08.483 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:31:08.483 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:08.483 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:08.483 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.483 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.483 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.483 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.483 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.483 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.483 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:08.483 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:08.483 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:08.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.484 --rc genhtml_branch_coverage=1 00:31:08.484 --rc genhtml_function_coverage=1 00:31:08.484 --rc genhtml_legend=1 00:31:08.484 --rc geninfo_all_blocks=1 00:31:08.484 --rc geninfo_unexecuted_blocks=1 00:31:08.484 00:31:08.484 ' 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:08.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.484 --rc genhtml_branch_coverage=1 00:31:08.484 --rc genhtml_function_coverage=1 00:31:08.484 --rc genhtml_legend=1 00:31:08.484 --rc geninfo_all_blocks=1 00:31:08.484 --rc geninfo_unexecuted_blocks=1 00:31:08.484 00:31:08.484 ' 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:08.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.484 --rc genhtml_branch_coverage=1 00:31:08.484 --rc genhtml_function_coverage=1 00:31:08.484 --rc genhtml_legend=1 00:31:08.484 --rc geninfo_all_blocks=1 00:31:08.484 --rc geninfo_unexecuted_blocks=1 00:31:08.484 00:31:08.484 ' 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:08.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.484 --rc genhtml_branch_coverage=1 00:31:08.484 --rc genhtml_function_coverage=1 00:31:08.484 --rc genhtml_legend=1 00:31:08.484 --rc geninfo_all_blocks=1 00:31:08.484 --rc geninfo_unexecuted_blocks=1 00:31:08.484 00:31:08.484 ' 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.484 13:23:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:08.484 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:08.485 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:08.485 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:08.485 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:08.485 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.485 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:08.485 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:08.485 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:08.485 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.485 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.485 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.485 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:08.485 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:08.485 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:08.485 13:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:15.056 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:15.056 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:15.056 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:15.056 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:15.056 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:15.057 13:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:15.057 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:15.057 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:15.057 Found net devices under 0000:86:00.0: cvl_0_0 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:15.057 Found net devices under 0000:86:00.1: cvl_0_1 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:15.057 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:15.058 13:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:15.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:15.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:31:15.058 00:31:15.058 --- 10.0.0.2 ping statistics --- 00:31:15.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.058 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:15.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:15.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:31:15.058 00:31:15.058 --- 10.0.0.1 ping statistics --- 00:31:15.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.058 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3060646 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3060646 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3060646 ']' 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:15.058 [2024-11-19 13:23:17.669865] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:15.058 [2024-11-19 13:23:17.670855] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:31:15.058 [2024-11-19 13:23:17.670896] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.058 [2024-11-19 13:23:17.751331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.058 [2024-11-19 13:23:17.792173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:15.058 [2024-11-19 13:23:17.792209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:15.058 [2024-11-19 13:23:17.792216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:15.058 [2024-11-19 13:23:17.792222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:15.058 [2024-11-19 13:23:17.792227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:15.058 [2024-11-19 13:23:17.792751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.058 [2024-11-19 13:23:17.859702] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:15.058 [2024-11-19 13:23:17.859922] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
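Condensed, the prologue above does the following: the discovery loop matches both ports of the E810 NIC (vendor 0x8086, device 0x159b, driver ice), finds their net devices cvl_0_0 and cvl_0_1 under /sys/bus/pci/devices/$pci/net/, and nvmf_tcp_init then wires them into a point-to-point NVMe/TCP topology with the target side isolated in its own network namespace. A sketch reconstructed from the trace (names and addresses as logged, paths abbreviated; not the verbatim nvmf/common.sh):

    # Target port moves into a fresh namespace; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port through the firewall, then prove both directions work
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Start the target inside the namespace: -m 0x2 pins it to core 1, and
    # --interrupt-mode selects the mode this test variant exercises
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

The sub-millisecond pings (0.471 ms and 0.203 ms round trips) confirm the link before any NVMe traffic starts; waitforlisten then blocks until the target opens /var/tmp/spdk.sock, and the "to intr mode from intr mode" notices show each SPDK thread coming up directly in interrupt mode.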
00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:15.058 [2024-11-19 13:23:17.925433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:15.058 [2024-11-19 13:23:17.949629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:15.058 13:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:15.058 malloc0 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:15.058 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.059 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:15.059 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.059 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:15.059 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:15.059 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:15.059 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:15.059 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:15.059 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:15.059 { 00:31:15.059 "params": { 00:31:15.059 "name": "Nvme$subsystem", 00:31:15.059 "trtype": "$TEST_TRANSPORT", 00:31:15.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.059 "adrfam": "ipv4", 00:31:15.059 "trsvcid": "$NVMF_PORT", 00:31:15.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.059 "hdgst": ${hdgst:-false}, 00:31:15.059 "ddgst": ${ddgst:-false} 00:31:15.059 }, 00:31:15.059 "method": "bdev_nvme_attach_controller" 00:31:15.059 } 00:31:15.059 EOF 00:31:15.059 )") 00:31:15.059 13:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:15.059 13:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:15.059 13:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:15.059 13:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:15.059 "params": { 00:31:15.059 "name": "Nvme1", 00:31:15.059 "trtype": "tcp", 00:31:15.059 "traddr": "10.0.0.2", 00:31:15.059 "adrfam": "ipv4", 00:31:15.059 "trsvcid": "4420", 00:31:15.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:15.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:15.059 "hdgst": false, 00:31:15.059 "ddgst": false 00:31:15.059 }, 00:31:15.059 "method": "bdev_nvme_attach_controller" 00:31:15.059 }' 00:31:15.059 [2024-11-19 13:23:18.045231] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
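The banner just above is the first bdevperf instance initializing. Before it started, the script provisioned the target over the RPC socket: a TCP transport with zero-copy enabled, subsystem cnode1 (capped at 10 namespaces), a listener on 10.0.0.2:4420, a 32 MiB malloc bdev with 4 KiB blocks, and that bdev attached as NSID 1. The rpc_cmd helper forwards its arguments to scripts/rpc.py, so the equivalent standalone commands would look like this (a sketch; gen_nvmf_target_json is the harness function whose heredoc expansion is shown above):

    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # First workload: 10 s verify pass, queue depth 128, 8 KiB I/O, with the
    # bdev_nvme_attach_controller config fed in via process substitution
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

Note that the initiator-side config is just the single bdev_nvme_attach_controller call printed above (Nvme1 attached to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, digests off); everything target-side was already set up over RPC.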
00:31:15.059 [2024-11-19 13:23:18.045284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060672 ]
00:31:15.059 [2024-11-19 13:23:18.122141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:15.059 [2024-11-19 13:23:18.163639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:15.059 Running I/O for 10 seconds...
00:31:17.375 8264.00 IOPS, 64.56 MiB/s
[2024-11-19T12:23:21.688Z] 8292.00 IOPS, 64.78 MiB/s
[2024-11-19T12:23:22.627Z] 8333.33 IOPS, 65.10 MiB/s
[2024-11-19T12:23:23.564Z] 8357.50 IOPS, 65.29 MiB/s
[2024-11-19T12:23:24.502Z] 8374.00 IOPS, 65.42 MiB/s
[2024-11-19T12:23:25.440Z] 8383.83 IOPS, 65.50 MiB/s
[2024-11-19T12:23:26.407Z] 8390.71 IOPS, 65.55 MiB/s
[2024-11-19T12:23:27.784Z] 8394.88 IOPS, 65.58 MiB/s
[2024-11-19T12:23:28.721Z] 8397.22 IOPS, 65.60 MiB/s
[2024-11-19T12:23:28.721Z] 8402.20 IOPS, 65.64 MiB/s
00:31:25.344 Latency(us)
00:31:25.344 [2024-11-19T12:23:28.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:25.344 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:31:25.344 Verification LBA range: start 0x0 length 0x1000
00:31:25.344 Nvme1n1 : 10.01 8404.89 65.66 0.00 0.00 15186.31 1040.03 21427.42
00:31:25.344 [2024-11-19T12:23:28.721Z] ===================================================================================================================
00:31:25.344 [2024-11-19T12:23:28.721Z] Total : 8404.89 65.66 0.00 0.00 15186.31 1040.03 21427.42
00:31:25.344 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3062403
00:31:25.344 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:31:25.344 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:25.344 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:31:25.344 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:31:25.344 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:31:25.344 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:31:25.344 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:25.344 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:25.344 {
00:31:25.344 "params": {
00:31:25.344 "name": "Nvme$subsystem",
00:31:25.344 "trtype": "$TEST_TRANSPORT",
00:31:25.344 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:25.344 "adrfam": "ipv4",
00:31:25.344 "trsvcid": "$NVMF_PORT",
00:31:25.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:25.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:25.344 "hdgst": ${hdgst:-false},
00:31:25.344 "ddgst": ${ddgst:-false}
00:31:25.344 },
00:31:25.344 "method": "bdev_nvme_attach_controller"
00:31:25.344 }
00:31:25.344 EOF
00:31:25.344 )")
[2024-11-19 13:23:28.553093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1
already in use 00:31:25.344 [2024-11-19 13:23:28.553124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.344 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:25.344 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:25.344 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:25.344 13:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:25.344 "params": { 00:31:25.344 "name": "Nvme1", 00:31:25.344 "trtype": "tcp", 00:31:25.344 "traddr": "10.0.0.2", 00:31:25.344 "adrfam": "ipv4", 00:31:25.344 "trsvcid": "4420", 00:31:25.344 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:25.344 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:25.344 "hdgst": false, 00:31:25.344 "ddgst": false 00:31:25.344 }, 00:31:25.344 "method": "bdev_nvme_attach_controller" 00:31:25.344 }' 00:31:25.344 [2024-11-19 13:23:28.565064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.344 [2024-11-19 13:23:28.565077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.344 [2024-11-19 13:23:28.577059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.344 [2024-11-19 13:23:28.577070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.344 [2024-11-19 13:23:28.589059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.344 [2024-11-19 13:23:28.589070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.344 [2024-11-19 13:23:28.594306] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:31:25.344 [2024-11-19 13:23:28.594347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3062403 ] 00:31:25.344 [2024-11-19 13:23:28.601059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.344 [2024-11-19 13:23:28.601070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.344 [2024-11-19 13:23:28.613056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.344 [2024-11-19 13:23:28.613067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.344 [2024-11-19 13:23:28.625058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.344 [2024-11-19 13:23:28.625069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.344 [2024-11-19 13:23:28.637059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.344 [2024-11-19 13:23:28.637070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.344 [2024-11-19 13:23:28.649060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.344 [2024-11-19 13:23:28.649070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.344 [2024-11-19 13:23:28.661058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.344 [2024-11-19 13:23:28.661068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.344 [2024-11-19 13:23:28.671388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.344 [2024-11-19 13:23:28.673057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.344 [2024-11-19 13:23:28.673067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.344 [2024-11-19 13:23:28.685062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.344 [2024-11-19 13:23:28.685076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.344 [2024-11-19 13:23:28.697060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.344 [2024-11-19 13:23:28.697076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.344 [2024-11-19 13:23:28.709062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.344 [2024-11-19 13:23:28.709076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.344 [2024-11-19 13:23:28.713703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.604 [2024-11-19 13:23:28.721059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.721071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.733070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.733090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.745067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:31:25.604 [2024-11-19 13:23:28.745081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.757060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.757075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.769061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.769071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.781060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.781070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.793061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.793075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.805163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.805184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.817066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.817081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.829066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.829081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.841061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.841071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.853060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.853070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.865065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.865078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.877064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.877078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.889064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.889078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.932199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.932217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.941064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.941075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 Running I/O for 5 
seconds... 00:31:25.604 [2024-11-19 13:23:28.955498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.955518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.604 [2024-11-19 13:23:28.970610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.604 [2024-11-19 13:23:28.970630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:28.985731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:28.985751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.001049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:29.001070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.013844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:29.013869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.028688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:29.028708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.042725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:29.042744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.058038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:29.058057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.073327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:29.073347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.086064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:29.086083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.098663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:29.098682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.114217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:29.114237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.129686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:29.129704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.145201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:29.145220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.158384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
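The error pairs that begin here and repeat for the rest of the capture are expected, not a failure: while the second bdevperf job (5 s of randrw at a 50/50 mix, queue depth 128, 8 KiB I/O) runs against cnode1, the script keeps re-issuing nvmf_subsystem_add_ns for a namespace ID that is already attached. Each attempt pauses the subsystem (hence nvmf_rpc_ns_paused in the message), fails with "Requested NSID 1 already in use", and resumes it, so subsystem pause/resume is exercised continuously under live zcopy I/O. The shape of the loop, inferred from the trace (a reconstruction, not the verbatim target/zcopy.sh):

    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!
    while kill -0 "$perfpid" 2> /dev/null; do
        # Always fails, but each attempt forces a pause/resume cycle on the
        # subsystem while zcopy I/O is in flight
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done

The steady ~12-15 ms spacing between consecutive attempts is consistent with one failing RPC round trip plus a pause/resume cycle per iteration.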
00:31:25.864 [2024-11-19 13:23:29.158402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.169118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:29.169136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.183081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:29.183100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.198478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:29.198498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.213473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:29.213491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:25.864 [2024-11-19 13:23:29.228805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:25.864 [2024-11-19 13:23:29.228824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.243491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.243512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.258439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.258458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.273325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.273344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.284349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.284367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.299000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.299018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.314019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.314037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.325374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.325393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.339152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.339171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.354354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.354372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.369872] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.369890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.381493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.381512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.395145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.395164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.410283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.410301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.425151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.425170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.435956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.435990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.451273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.451292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.466739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.466759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.481880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.481904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.124 [2024-11-19 13:23:29.497690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.124 [2024-11-19 13:23:29.497709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.513203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.513223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.524112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.524131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.539078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.539097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.554318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.554337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.569332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.569351] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.583312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.583332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.598780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.598799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.613683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.613701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.629175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.629195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.640676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.640695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.655117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.655136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.670269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.670288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.685541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.685559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.697005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.697024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.711090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.711110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.726769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.726789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.741527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.741546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.384 [2024-11-19 13:23:29.757336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.384 [2024-11-19 13:23:29.757362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:29.770940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:29.770968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:29.786355] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:29.786375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:29.800920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:29.800940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:29.812676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:29.812696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:29.827132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:29.827152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:29.842311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:29.842330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:29.857223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:29.857242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:29.869849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:29.869868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:29.883085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:29.883105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:29.898342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:29.898363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:29.913721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:29.913740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:29.928815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:29.928836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:29.942058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:29.942077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 16278.00 IOPS, 127.17 MiB/s [2024-11-19T12:23:30.021Z] [2024-11-19 13:23:29.957308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:29.957327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:29.970125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:29.970145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:29.985160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
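The rate lines interleaved with the error pairs (16278.00 IOPS, 127.17 MiB/s a few entries above; 16256.50 IOPS, 127.00 MiB/s later) are bdevperf's one-second progress ticks for the randrw job, and they check out against the 8 KiB I/O size: 16278 IOPS × 8192 B = 133,349,376 B/s ≈ 127.17 MiB/s. The same arithmetic holds for the earlier verify run (8404.89 IOPS × 8192 B ≈ 65.66 MiB/s), whose 15186.31 us average latency also squares with Little's law at queue depth 128: 128 / 8404.89 IOPS ≈ 15.2 ms per I/O.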
00:31:26.644 [2024-11-19 13:23:29.985180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:29.997916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:29.997935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.644 [2024-11-19 13:23:30.013423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.644 [2024-11-19 13:23:30.013443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.903 [2024-11-19 13:23:30.029841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.903 [2024-11-19 13:23:30.029868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.903 [2024-11-19 13:23:30.045210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.903 [2024-11-19 13:23:30.045240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.903 [2024-11-19 13:23:30.057042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.903 [2024-11-19 13:23:30.057063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.903 [2024-11-19 13:23:30.071091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.903 [2024-11-19 13:23:30.071112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.903 [2024-11-19 13:23:30.086158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.903 [2024-11-19 13:23:30.086178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.903 [2024-11-19 13:23:30.103064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.903 [2024-11-19 13:23:30.103085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.903 [2024-11-19 13:23:30.118109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.903 [2024-11-19 13:23:30.118129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.903 [2024-11-19 13:23:30.133197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.903 [2024-11-19 13:23:30.133218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.903 [2024-11-19 13:23:30.144180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.903 [2024-11-19 13:23:30.144198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.903 [2024-11-19 13:23:30.159113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.903 [2024-11-19 13:23:30.159132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.903 [2024-11-19 13:23:30.174349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.903 [2024-11-19 13:23:30.174368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.903 [2024-11-19 13:23:30.189576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:26.903 [2024-11-19 13:23:30.189595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.903 [2024-11-19 13:23:30.205795] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:26.903 [2024-11-19 13:23:30.205814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line pair -- subsystem.c:2123 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517 "Unable to add namespace" -- repeats at roughly 11-16 ms intervals, about 250 times, from 13:23:30.220966 through 13:23:33.945469; only the periodic I/O progress samples break the pattern ...]
00:31:27.683 16256.50 IOPS, 127.00 MiB/s [2024-11-19T12:23:31.060Z]
00:31:28.722 16245.00 IOPS, 126.91 MiB/s [2024-11-19T12:23:32.099Z]
00:31:29.762 16233.75 IOPS, 126.83 MiB/s [2024-11-19T12:23:33.139Z]
00:31:30.801 [2024-11-19 13:23:33.958496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:30.801 [2024-11-19 13:23:33.958515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:30.801 16266.00 IOPS, 127.08 MiB/s
00:31:30.801 Latency(us)
00:31:30.801 [2024-11-19T12:23:34.178Z] Device Information : runtime(s)     IOPS    MiB/s  Fail/s   TO/s   Average       min       max
00:31:30.801 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:31:30.801 Nvme1n1            :       5.01 16270.58   127.11    0.00   0.00   7860.13   2080.06  13848.04
00:31:30.801 [2024-11-19T12:23:34.178Z] ===================================================================================================
00:31:30.801 [2024-11-19T12:23:34.178Z] Total              :            16270.58   127.11    0.00   0.00   7860.13   2080.06  13848.04
[... the NSID-in-use / unable-to-add-namespace pair continues at ~12 ms intervals from 13:23:33.969066 through 13:23:34.125071 until the background job exits ...]
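The burst condensed above is a negative-path exercise: while verify I/O keeps running against NSID 1, the test repeatedly asks the target to add another namespace with the same NSID. Each RPC pauses the subsystem, the add fails in the paused callback (hence nvmf_rpc_ns_paused at nvmf_rpc.c:1517), and the subsystem resumes. A minimal sketch of a loop that would produce exactly this error pair -- the iteration count, the bdev name malloc0, and the rpc.py path are assumptions, since the script body itself is not part of this log:

    #!/usr/bin/env bash
    # Hypothetical reconstruction, NOT the verbatim zcopy.sh loop.
    # Assumes a running SPDK nvmf target whose subsystem already exposes
    # NSID 1, and scripts/rpc.py talking to the default RPC socket.
    NQN=nqn.2016-06.io.spdk:cnode1
    RPC=./scripts/rpc.py

    for _ in $(seq 1 250); do
        # Every attempt hits subsystem.c "Requested NSID 1 already in use",
        # surfaced by the RPC layer as "Unable to add namespace".
        "$RPC" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 || true
    done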
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.801 [2024-11-19 13:23:34.101078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.801 [2024-11-19 13:23:34.113057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.801 [2024-11-19 13:23:34.113067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.801 [2024-11-19 13:23:34.125061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:30.801 [2024-11-19 13:23:34.125071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:30.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3062403) - No such process 00:31:30.801 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3062403 00:31:30.801 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.801 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.801 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:30.801 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.801 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:30.801 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.801 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:30.801 delay0 00:31:30.801 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.801 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:30.801 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.801 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:30.801 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.801 13:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:31.060 [2024-11-19 13:23:34.275103] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:37.627 Initializing NVMe Controllers 00:31:37.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:37.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:37.628 Initialization complete. Launching workers. 
00:31:37.628 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 264, failed: 15593 00:31:37.628 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 15751, failed to submit 106 00:31:37.628 success 15685, unsuccessful 66, failed 0 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:37.628 rmmod nvme_tcp 00:31:37.628 rmmod nvme_fabrics 00:31:37.628 rmmod nvme_keyring 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3060646 ']' 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3060646 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3060646 ']' 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3060646 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3060646 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3060646' 00:31:37.628 killing process with pid 3060646 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3060646 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3060646 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:37.628 13:23:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.628 13:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.330 13:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:40.330 00:31:40.330 real 0m31.455s 00:31:40.330 user 0m40.581s 00:31:40.330 sys 0m12.468s 00:31:40.330 13:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.330 13:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:40.330 ************************************ 00:31:40.330 END TEST nvmf_zcopy 00:31:40.330 ************************************ 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:40.330 ************************************ 00:31:40.330 START TEST nvmf_nmic 00:31:40.330 ************************************ 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:40.330 * Looking for test storage... 
00:31:40.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.330 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:40.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.330 --rc genhtml_branch_coverage=1 00:31:40.330 --rc genhtml_function_coverage=1 00:31:40.330 --rc genhtml_legend=1 00:31:40.330 --rc geninfo_all_blocks=1 00:31:40.330 --rc geninfo_unexecuted_blocks=1 00:31:40.330 00:31:40.331 ' 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:40.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.331 --rc genhtml_branch_coverage=1 00:31:40.331 --rc genhtml_function_coverage=1 00:31:40.331 --rc genhtml_legend=1 00:31:40.331 --rc geninfo_all_blocks=1 00:31:40.331 --rc geninfo_unexecuted_blocks=1 00:31:40.331 00:31:40.331 ' 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:40.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.331 --rc genhtml_branch_coverage=1 00:31:40.331 --rc genhtml_function_coverage=1 00:31:40.331 --rc genhtml_legend=1 00:31:40.331 --rc geninfo_all_blocks=1 00:31:40.331 --rc geninfo_unexecuted_blocks=1 00:31:40.331 00:31:40.331 ' 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:40.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.331 --rc genhtml_branch_coverage=1 00:31:40.331 --rc genhtml_function_coverage=1 00:31:40.331 --rc genhtml_legend=1 00:31:40.331 --rc geninfo_all_blocks=1 00:31:40.331 --rc geninfo_unexecuted_blocks=1 00:31:40.331 00:31:40.331 ' 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.331 13:23:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:40.331 13:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:45.608 13:23:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:45.608 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:45.608 13:23:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:45.608 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:45.608 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:45.609 Found net devices under 0000:86:00.0: cvl_0_0 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:45.609 
13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:45.609 Found net devices under 0000:86:00.1: cvl_0_1 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:45.609 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:45.869 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:31:45.869 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:45.869 13:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:45.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:45.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:31:45.869 00:31:45.869 --- 10.0.0.2 ping statistics --- 00:31:45.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:45.869 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:45.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:45.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:31:45.869 00:31:45.869 --- 10.0.0.1 ping statistics --- 00:31:45.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:45.869 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3067850 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3067850 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3067850 ']' 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:45.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:45.869 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:45.869 [2024-11-19 13:23:49.193608] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:45.869 [2024-11-19 13:23:49.194550] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:31:45.869 [2024-11-19 13:23:49.194586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.128 [2024-11-19 13:23:49.274628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:46.128 [2024-11-19 13:23:49.317754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.128 [2024-11-19 13:23:49.317794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:46.128 [2024-11-19 13:23:49.317802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.128 [2024-11-19 13:23:49.317808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.128 [2024-11-19 13:23:49.317813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:46.128 [2024-11-19 13:23:49.319227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:46.128 [2024-11-19 13:23:49.319337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:46.128 [2024-11-19 13:23:49.319442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.128 [2024-11-19 13:23:49.319443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:46.128 [2024-11-19 13:23:49.386514] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:46.128 [2024-11-19 13:23:49.387292] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:46.128 [2024-11-19 13:23:49.387539] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:46.128 [2024-11-19 13:23:49.387921] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:46.128 [2024-11-19 13:23:49.387973] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:46.128 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:46.128 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:46.129 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:46.129 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:46.129 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.129 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:46.129 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:46.129 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.129 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.129 [2024-11-19 13:23:49.460280] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:46.129 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.129 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:46.129 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.129 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.388 Malloc0 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.388 [2024-11-19 13:23:49.536274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:46.388 test case1: single bdev can't be used in multiple subsystems 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.388 [2024-11-19 13:23:49.567899] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:46.388 [2024-11-19 13:23:49.567919] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:46.388 [2024-11-19 13:23:49.567927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.388 request: 00:31:46.388 { 00:31:46.388 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:46.388 "namespace": { 00:31:46.388 "bdev_name": "Malloc0", 00:31:46.388 "no_auto_visible": false 00:31:46.388 }, 00:31:46.388 "method": "nvmf_subsystem_add_ns", 00:31:46.388 "req_id": 1 00:31:46.388 } 00:31:46.388 Got JSON-RPC error response 00:31:46.388 response: 00:31:46.388 { 00:31:46.388 "code": -32602, 00:31:46.388 "message": "Invalid parameters" 00:31:46.388 } 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:46.388 13:23:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:46.388 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:46.389 Adding namespace failed - expected result. 00:31:46.389 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:46.389 test case2: host connect to nvmf target in multiple paths 00:31:46.389 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:46.389 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.389 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:46.389 [2024-11-19 13:23:49.579987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:46.389 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.389 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:46.648 13:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:46.906 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:46.906 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:46.906 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:46.906 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:46.906 13:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:48.808 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:48.808 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:48.808 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:49.077 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:49.077 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:49.077 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:49.077 13:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:49.077 [global] 00:31:49.077 thread=1 00:31:49.077 invalidate=1 
00:31:49.077 rw=write 00:31:49.077 time_based=1 00:31:49.077 runtime=1 00:31:49.077 ioengine=libaio 00:31:49.077 direct=1 00:31:49.077 bs=4096 00:31:49.077 iodepth=1 00:31:49.077 norandommap=0 00:31:49.077 numjobs=1 00:31:49.077 00:31:49.077 verify_dump=1 00:31:49.077 verify_backlog=512 00:31:49.077 verify_state_save=0 00:31:49.077 do_verify=1 00:31:49.077 verify=crc32c-intel 00:31:49.077 [job0] 00:31:49.077 filename=/dev/nvme0n1 00:31:49.077 Could not set queue depth (nvme0n1) 00:31:49.337 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:49.337 fio-3.35 00:31:49.337 Starting 1 thread 00:31:50.268 00:31:50.268 job0: (groupid=0, jobs=1): err= 0: pid=3068466: Tue Nov 19 13:23:53 2024 00:31:50.268 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:31:50.268 slat (nsec): min=6067, max=26163, avg=6878.11, stdev=688.85 00:31:50.268 clat (usec): min=193, max=292, avg=217.41, stdev=18.88 00:31:50.268 lat (usec): min=200, max=299, avg=224.29, stdev=18.87 00:31:50.268 clat percentiles (usec): 00:31:50.268 | 1.00th=[ 198], 5.00th=[ 200], 10.00th=[ 200], 20.00th=[ 202], 00:31:50.268 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 212], 00:31:50.268 | 70.00th=[ 219], 80.00th=[ 245], 90.00th=[ 249], 95.00th=[ 251], 00:31:50.268 | 99.00th=[ 258], 99.50th=[ 260], 99.90th=[ 265], 99.95th=[ 265], 00:31:50.268 | 99.99th=[ 293] 00:31:50.268 write: IOPS=2647, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1001msec); 0 zone resets 00:31:50.268 slat (nsec): min=7821, max=38937, avg=9805.94, stdev=1125.97 00:31:50.268 clat (usec): min=121, max=381, avg=146.97, stdev=21.77 00:31:50.268 lat (usec): min=130, max=420, avg=156.78, stdev=21.91 00:31:50.268 clat percentiles (usec): 00:31:50.268 | 1.00th=[ 127], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 133], 00:31:50.268 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:31:50.268 | 70.00th=[ 143], 80.00th=[ 180], 90.00th=[ 184], 95.00th=[ 186], 00:31:50.268 | 99.00th=[ 194], 99.50th=[ 245], 99.90th=[ 251], 99.95th=[ 253], 00:31:50.268 | 99.99th=[ 383] 00:31:50.268 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:31:50.268 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:50.268 lat (usec) : 250=95.93%, 500=4.07% 00:31:50.268 cpu : usr=2.30%, sys=4.60%, ctx=5210, majf=0, minf=1 00:31:50.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.268 issued rwts: total=2560,2650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:50.268 00:31:50.268 Run status group 0 (all jobs): 00:31:50.268 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:31:50.268 WRITE: bw=10.3MiB/s (10.8MB/s), 10.3MiB/s-10.3MiB/s (10.8MB/s-10.8MB/s), io=10.4MiB (10.9MB), run=1001-1001msec 00:31:50.268 00:31:50.268 Disk stats (read/write): 00:31:50.268 nvme0n1: ios=2209/2560, merge=0/0, ticks=476/370, in_queue=846, util=91.08% 00:31:50.269 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:50.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:50.527 rmmod nvme_tcp 00:31:50.527 rmmod nvme_fabrics 00:31:50.527 rmmod nvme_keyring 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3067850 ']' 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3067850 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3067850 ']' 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3067850 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:50.527 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3067850 00:31:50.786 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:50.786 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:50.786 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3067850' 00:31:50.786 killing process with pid 
3067850 00:31:50.786 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3067850 00:31:50.786 13:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3067850 00:31:50.786 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:50.786 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:50.786 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:50.786 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:50.786 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:50.786 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:50.786 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:50.786 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:50.786 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:50.786 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.786 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.786 13:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:53.323 00:31:53.323 real 0m13.138s 00:31:53.323 user 0m24.524s 00:31:53.323 sys 0m6.142s 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.323 ************************************ 00:31:53.323 END TEST nvmf_nmic 00:31:53.323 ************************************ 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:53.323 ************************************ 00:31:53.323 START TEST nvmf_fio_target 00:31:53.323 ************************************ 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:53.323 * Looking for test storage... 
00:31:53.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:53.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.323 --rc genhtml_branch_coverage=1 00:31:53.323 --rc genhtml_function_coverage=1 00:31:53.323 --rc genhtml_legend=1 00:31:53.323 --rc geninfo_all_blocks=1 00:31:53.323 --rc geninfo_unexecuted_blocks=1 00:31:53.323 00:31:53.323 ' 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:53.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.323 --rc genhtml_branch_coverage=1 00:31:53.323 --rc genhtml_function_coverage=1 00:31:53.323 --rc genhtml_legend=1 00:31:53.323 --rc geninfo_all_blocks=1 00:31:53.323 --rc geninfo_unexecuted_blocks=1 00:31:53.323 00:31:53.323 ' 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:53.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.323 --rc genhtml_branch_coverage=1 00:31:53.323 --rc genhtml_function_coverage=1 00:31:53.323 --rc genhtml_legend=1 00:31:53.323 --rc geninfo_all_blocks=1 00:31:53.323 --rc geninfo_unexecuted_blocks=1 00:31:53.323 00:31:53.323 ' 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:53.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.323 --rc genhtml_branch_coverage=1 00:31:53.323 --rc genhtml_function_coverage=1 00:31:53.323 --rc genhtml_legend=1 00:31:53.323 --rc geninfo_all_blocks=1 00:31:53.323 --rc geninfo_unexecuted_blocks=1 00:31:53.323 
00:31:53.323 ' 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.323 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:53.324 13:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.895 13:24:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.895 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.896 13:24:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:59.896 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:59.896 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:59.896 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:59.896 Found net devices under 0000:86:00.1: cvl_0_1 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:31:59.896 00:31:59.896 --- 10.0.0.2 ping statistics --- 00:31:59.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.896 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:59.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:31:59.896 00:31:59.896 --- 10.0.0.1 ping statistics --- 00:31:59.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.896 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:31:59.896 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3072354 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3072354 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3072354 ']' 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
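[Editor's note] The lines above show the harness isolating target from initiator on the two ice ports discovered earlier: the first cvl interface is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the second stays in the root namespace as 10.0.0.1, an iptables rule admits NVMe/TCP on port 4420, and one ping in each direction proves the path before the target process is launched inside the namespace. A minimal standalone sketch of that plumbing follows; the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are taken from the trace above, while packaging it as a single script is an assumption (the harness drives these steps through nvmf/common.sh helpers such as nvmf_tcp_init and ipts):

#!/usr/bin/env bash
# Sketch: rebuild the two-namespace TCP topology seen in the trace.
set -euo pipefail
NS=cvl_0_0_ns_spdk                        # namespace that will host nvmf_tgt
ip -4 addr flush cvl_0_0                  # start from clean interfaces
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"           # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator IP, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Admit NVMe/TCP; the comment tag lets cleanup strip exactly this rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                        # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1    # target namespace -> initiator

Only after both pings succeed does nvmfappstart launch the target inside the namespace, which is why NVMF_APP is prefixed with the ip netns exec command in the trace.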
00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.897 [2024-11-19 13:24:02.446551] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:59.897 [2024-11-19 13:24:02.447470] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:31:59.897 [2024-11-19 13:24:02.447503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.897 [2024-11-19 13:24:02.527715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:59.897 [2024-11-19 13:24:02.570362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.897 [2024-11-19 13:24:02.570401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.897 [2024-11-19 13:24:02.570408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.897 [2024-11-19 13:24:02.570414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.897 [2024-11-19 13:24:02.570419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.897 [2024-11-19 13:24:02.571995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.897 [2024-11-19 13:24:02.572108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.897 [2024-11-19 13:24:02.572214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.897 [2024-11-19 13:24:02.572215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:59.897 [2024-11-19 13:24:02.639424] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:59.897 [2024-11-19 13:24:02.640044] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:59.897 [2024-11-19 13:24:02.640392] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:59.897 [2024-11-19 13:24:02.640748] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:59.897 [2024-11-19 13:24:02.640801] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
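[Editor's note] With the target now running in interrupt mode (four reactors, each poll-group thread switched to intr mode at startup), everything else in fio.sh happens over the RPC socket. Condensed, the RPC sequence traced in the lines that follow amounts to the sketch below; the bare rpc.py name is shorthand for the full scripts/rpc.py path in the workspace and the loop form is an assumption, but each individual call and its arguments are verbatim from the trace:

# Create the TCP transport with the options used by this test
# (-o and -u 8192 copied verbatim from the trace).
rpc.py nvmf_create_transport -t tcp -o -u 8192
# Seven 64 MB / 512-byte-block malloc bdevs (Malloc0..Malloc6): two plain
# namespaces, two members for a raid0, three members for a concat raid.
for i in $(seq 0 6); do rpc.py bdev_malloc_create 64 512; done
rpc.py bdev_raid_create -n raid0   -z 64 -r 0       -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64  -b 'Malloc4 Malloc5 Malloc6'
# One subsystem exposing all four namespaces on the namespaced target IP.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After nvme connect against that listener, the initiator sees the four namespaces as /dev/nvme0n1 through /dev/nvme0n4, which is what the four-job fio runs further below write to.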
00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:59.897 [2024-11-19 13:24:02.880855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.897 13:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:59.897 13:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:59.897 13:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:00.155 13:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:00.156 13:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:00.415 13:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:00.415 13:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:00.674 13:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:00.674 13:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:00.674 13:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:00.933 13:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:00.933 13:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:01.192 13:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:01.192 13:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:01.451 13:24:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:32:01.451 13:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:01.451 13:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:01.709 13:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:01.709 13:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:01.967 13:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:01.967 13:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:02.224 13:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:02.224 [2024-11-19 13:24:05.592786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.482 13:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:02.482 13:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:02.741 13:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:02.998 13:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:02.998 13:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:02.998 13:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:02.998 13:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:02.998 13:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:02.998 13:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:04.893 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:04.893 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:32:04.893 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:04.893 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:04.893 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:04.893 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:32:04.893 13:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:05.175 [global] 00:32:05.175 thread=1 00:32:05.175 invalidate=1 00:32:05.175 rw=write 00:32:05.175 time_based=1 00:32:05.175 runtime=1 00:32:05.175 ioengine=libaio 00:32:05.175 direct=1 00:32:05.175 bs=4096 00:32:05.175 iodepth=1 00:32:05.175 norandommap=0 00:32:05.175 numjobs=1 00:32:05.175 00:32:05.175 verify_dump=1 00:32:05.175 verify_backlog=512 00:32:05.175 verify_state_save=0 00:32:05.175 do_verify=1 00:32:05.175 verify=crc32c-intel 00:32:05.175 [job0] 00:32:05.175 filename=/dev/nvme0n1 00:32:05.175 [job1] 00:32:05.175 filename=/dev/nvme0n2 00:32:05.175 [job2] 00:32:05.175 filename=/dev/nvme0n3 00:32:05.175 [job3] 00:32:05.175 filename=/dev/nvme0n4 00:32:05.175 Could not set queue depth (nvme0n1) 00:32:05.175 Could not set queue depth (nvme0n2) 00:32:05.175 Could not set queue depth (nvme0n3) 00:32:05.175 Could not set queue depth (nvme0n4) 00:32:05.436 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:05.436 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:05.436 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:05.437 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:05.437 fio-3.35 00:32:05.437 Starting 4 threads 00:32:06.815 00:32:06.815 job0: (groupid=0, jobs=1): err= 0: pid=3073862: Tue Nov 19 13:24:09 2024 00:32:06.815 read: IOPS=20, BW=82.8KiB/s (84.7kB/s)(84.0KiB/1015msec) 00:32:06.815 slat (nsec): min=9910, max=26995, avg=25076.19, stdev=3576.64 00:32:06.815 clat (usec): min=40650, max=41952, avg=40998.03, stdev=232.64 00:32:06.815 lat (usec): min=40660, max=41977, avg=41023.11, stdev=233.75 00:32:06.815 clat percentiles (usec): 00:32:06.815 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:06.815 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:06.815 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:06.815 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:06.815 | 99.99th=[42206] 00:32:06.815 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:32:06.815 slat (usec): min=10, max=40837, avg=111.15, stdev=1849.37 00:32:06.815 clat (usec): min=144, max=299, avg=184.77, stdev=13.28 00:32:06.815 lat (usec): min=156, max=41131, avg=295.92, stdev=1854.72 00:32:06.815 clat percentiles (usec): 00:32:06.815 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:32:06.815 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:32:06.815 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 198], 95.00th=[ 204], 00:32:06.815 | 
99.00th=[ 217], 99.50th=[ 243], 99.90th=[ 302], 99.95th=[ 302], 00:32:06.815 | 99.99th=[ 302] 00:32:06.815 bw ( KiB/s): min= 4087, max= 4087, per=19.95%, avg=4087.00, stdev= 0.00, samples=1 00:32:06.815 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:32:06.815 lat (usec) : 250=95.68%, 500=0.38% 00:32:06.815 lat (msec) : 50=3.94% 00:32:06.815 cpu : usr=0.39%, sys=1.08%, ctx=537, majf=0, minf=1 00:32:06.815 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.815 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.815 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:06.815 job1: (groupid=0, jobs=1): err= 0: pid=3073863: Tue Nov 19 13:24:09 2024 00:32:06.815 read: IOPS=1535, BW=6143KiB/s (6291kB/s)(6168KiB/1004msec) 00:32:06.815 slat (nsec): min=5946, max=23916, avg=7104.37, stdev=1320.81 00:32:06.815 clat (usec): min=183, max=41045, avg=399.76, stdev=2536.10 00:32:06.815 lat (usec): min=189, max=41066, avg=406.86, stdev=2536.88 00:32:06.815 clat percentiles (usec): 00:32:06.815 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 208], 00:32:06.815 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 247], 00:32:06.815 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 293], 00:32:06.815 | 99.00th=[ 445], 99.50th=[ 469], 99.90th=[41157], 99.95th=[41157], 00:32:06.815 | 99.99th=[41157] 00:32:06.815 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:32:06.816 slat (nsec): min=8748, max=40342, avg=9972.86, stdev=1318.44 00:32:06.816 clat (usec): min=117, max=341, avg=170.05, stdev=32.42 00:32:06.816 lat (usec): min=126, max=351, avg=180.02, stdev=32.53 00:32:06.816 clat percentiles (usec): 00:32:06.816 | 1.00th=[ 126], 5.00th=[ 131], 10.00th=[ 137], 20.00th=[ 143], 00:32:06.816 | 30.00th=[ 149], 40.00th=[ 157], 50.00th=[ 165], 60.00th=[ 174], 00:32:06.816 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 239], 95.00th=[ 243], 00:32:06.816 | 99.00th=[ 249], 99.50th=[ 253], 99.90th=[ 293], 99.95th=[ 293], 00:32:06.816 | 99.99th=[ 343] 00:32:06.816 bw ( KiB/s): min= 5296, max=11065, per=39.93%, avg=8180.50, stdev=4079.30, samples=2 00:32:06.816 iops : min= 1324, max= 2766, avg=2045.00, stdev=1019.65, samples=2 00:32:06.816 lat (usec) : 250=86.49%, 500=13.31%, 750=0.03% 00:32:06.816 lat (msec) : 50=0.17% 00:32:06.816 cpu : usr=1.30%, sys=3.69%, ctx=3590, majf=0, minf=2 00:32:06.816 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.816 issued rwts: total=1542,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.816 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:06.816 job2: (groupid=0, jobs=1): err= 0: pid=3073864: Tue Nov 19 13:24:09 2024 00:32:06.816 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:32:06.816 slat (nsec): min=6740, max=28052, avg=7871.70, stdev=1641.47 00:32:06.816 clat (usec): min=197, max=40995, avg=763.63, stdev=4474.85 00:32:06.816 lat (usec): min=204, max=41023, avg=771.50, stdev=4475.91 00:32:06.816 clat percentiles (usec): 00:32:06.816 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 231], 00:32:06.816 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 
249], 60.00th=[ 255], 00:32:06.816 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 326], 00:32:06.816 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:06.816 | 99.99th=[41157] 00:32:06.816 write: IOPS=1100, BW=4404KiB/s (4509kB/s)(4408KiB/1001msec); 0 zone resets 00:32:06.816 slat (usec): min=9, max=9808, avg=20.62, stdev=295.13 00:32:06.816 clat (usec): min=136, max=329, avg=165.16, stdev=23.67 00:32:06.816 lat (usec): min=148, max=10093, avg=185.78, stdev=299.75 00:32:06.816 clat percentiles (usec): 00:32:06.816 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 145], 00:32:06.816 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 165], 00:32:06.816 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 206], 00:32:06.816 | 99.00th=[ 237], 99.50th=[ 245], 99.90th=[ 293], 99.95th=[ 330], 00:32:06.816 | 99.99th=[ 330] 00:32:06.816 bw ( KiB/s): min= 4096, max= 4096, per=20.00%, avg=4096.00, stdev= 0.00, samples=1 00:32:06.816 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:06.816 lat (usec) : 250=76.43%, 500=22.95% 00:32:06.816 lat (msec) : 50=0.61% 00:32:06.816 cpu : usr=0.70%, sys=2.50%, ctx=2128, majf=0, minf=1 00:32:06.816 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.816 issued rwts: total=1024,1102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.816 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:06.816 job3: (groupid=0, jobs=1): err= 0: pid=3073865: Tue Nov 19 13:24:09 2024 00:32:06.816 read: IOPS=1075, BW=4303KiB/s (4407kB/s)(4368KiB/1015msec) 00:32:06.816 slat (nsec): min=6669, max=35577, avg=7724.77, stdev=1492.75 00:32:06.816 clat (usec): min=217, max=41952, avg=609.47, stdev=3692.50 00:32:06.816 lat (usec): min=224, max=41968, avg=617.19, stdev=3693.34 00:32:06.816 clat percentiles (usec): 00:32:06.816 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:32:06.816 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 289], 00:32:06.816 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 314], 00:32:06.816 | 99.00th=[ 433], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:32:06.816 | 99.99th=[42206] 00:32:06.816 write: IOPS=1513, BW=6053KiB/s (6198kB/s)(6144KiB/1015msec); 0 zone resets 00:32:06.816 slat (usec): min=8, max=40752, avg=43.43, stdev=1065.49 00:32:06.816 clat (usec): min=131, max=556, avg=174.46, stdev=28.64 00:32:06.816 lat (usec): min=142, max=40948, avg=217.88, stdev=1066.68 00:32:06.816 clat percentiles (usec): 00:32:06.816 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:32:06.816 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:32:06.816 | 70.00th=[ 174], 80.00th=[ 186], 90.00th=[ 217], 95.00th=[ 227], 00:32:06.816 | 99.00th=[ 249], 99.50th=[ 262], 99.90th=[ 412], 99.95th=[ 553], 00:32:06.816 | 99.99th=[ 553] 00:32:06.816 bw ( KiB/s): min= 4096, max= 8175, per=29.95%, avg=6135.50, stdev=2884.29, samples=2 00:32:06.816 iops : min= 1024, max= 2043, avg=1533.50, stdev=720.54, samples=2 00:32:06.816 lat (usec) : 250=73.59%, 500=26.03%, 750=0.04% 00:32:06.816 lat (msec) : 50=0.34% 00:32:06.816 cpu : usr=0.79%, sys=3.06%, ctx=2631, majf=0, minf=1 00:32:06.816 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:32:06.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.816 issued rwts: total=1092,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.816 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:06.816 00:32:06.816 Run status group 0 (all jobs): 00:32:06.816 READ: bw=14.2MiB/s (14.8MB/s), 82.8KiB/s-6143KiB/s (84.7kB/s-6291kB/s), io=14.4MiB (15.1MB), run=1001-1015msec 00:32:06.816 WRITE: bw=20.0MiB/s (21.0MB/s), 2018KiB/s-8159KiB/s (2066kB/s-8355kB/s), io=20.3MiB (21.3MB), run=1001-1015msec 00:32:06.816 00:32:06.816 Disk stats (read/write): 00:32:06.816 nvme0n1: ios=67/512, merge=0/0, ticks=1334/94, in_queue=1428, util=87.27% 00:32:06.816 nvme0n2: ios=1586/2048, merge=0/0, ticks=407/341, in_queue=748, util=84.95% 00:32:06.816 nvme0n3: ios=536/673, merge=0/0, ticks=1553/115, in_queue=1668, util=92.16% 00:32:06.816 nvme0n4: ios=1109/1536, merge=0/0, ticks=1313/259, in_queue=1572, util=99.89% 00:32:06.816 13:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:06.816 [global] 00:32:06.816 thread=1 00:32:06.816 invalidate=1 00:32:06.816 rw=randwrite 00:32:06.816 time_based=1 00:32:06.816 runtime=1 00:32:06.816 ioengine=libaio 00:32:06.816 direct=1 00:32:06.816 bs=4096 00:32:06.816 iodepth=1 00:32:06.816 norandommap=0 00:32:06.816 numjobs=1 00:32:06.816 00:32:06.816 verify_dump=1 00:32:06.816 verify_backlog=512 00:32:06.816 verify_state_save=0 00:32:06.816 do_verify=1 00:32:06.816 verify=crc32c-intel 00:32:06.816 [job0] 00:32:06.816 filename=/dev/nvme0n1 00:32:06.816 [job1] 00:32:06.816 filename=/dev/nvme0n2 00:32:06.816 [job2] 00:32:06.816 filename=/dev/nvme0n3 00:32:06.816 [job3] 00:32:06.816 filename=/dev/nvme0n4 00:32:06.816 Could not set queue depth (nvme0n1) 00:32:06.816 Could not set queue depth (nvme0n2) 00:32:06.816 Could not set queue depth (nvme0n3) 00:32:06.816 Could not set queue depth (nvme0n4) 00:32:07.074 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:07.074 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:07.074 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:07.074 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:07.074 fio-3.35 00:32:07.074 Starting 4 threads 00:32:08.448 00:32:08.448 job0: (groupid=0, jobs=1): err= 0: pid=3074240: Tue Nov 19 13:24:11 2024 00:32:08.448 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:08.448 slat (nsec): min=8461, max=59826, avg=9883.18, stdev=2152.03 00:32:08.448 clat (usec): min=195, max=1304, avg=244.50, stdev=42.67 00:32:08.448 lat (usec): min=217, max=1314, avg=254.38, stdev=42.83 00:32:08.448 clat percentiles (usec): 00:32:08.448 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 233], 00:32:08.448 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 243], 00:32:08.448 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 262], 00:32:08.448 | 99.00th=[ 343], 99.50th=[ 461], 99.90th=[ 971], 99.95th=[ 1057], 00:32:08.448 | 99.99th=[ 1303] 00:32:08.448 write: IOPS=2512, BW=9.81MiB/s (10.3MB/s)(9.82MiB/1001msec); 0 zone resets 00:32:08.448 slat (nsec): min=11688, max=42224, avg=13816.46, stdev=2251.83 00:32:08.448 clat (usec): min=136, 
max=381, avg=170.46, stdev=19.72 00:32:08.448 lat (usec): min=148, max=394, avg=184.28, stdev=20.09 00:32:08.448 clat percentiles (usec): 00:32:08.448 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:32:08.448 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:32:08.448 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 223], 00:32:08.448 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 269], 99.95th=[ 273], 00:32:08.448 | 99.99th=[ 383] 00:32:08.448 bw ( KiB/s): min= 9632, max= 9632, per=39.80%, avg=9632.00, stdev= 0.00, samples=1 00:32:08.448 iops : min= 2408, max= 2408, avg=2408.00, stdev= 0.00, samples=1 00:32:08.448 lat (usec) : 250=93.01%, 500=6.86%, 750=0.04%, 1000=0.04% 00:32:08.448 lat (msec) : 2=0.04% 00:32:08.448 cpu : usr=4.90%, sys=7.70%, ctx=4566, majf=0, minf=1 00:32:08.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.448 issued rwts: total=2048,2515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:08.448 job1: (groupid=0, jobs=1): err= 0: pid=3074241: Tue Nov 19 13:24:11 2024 00:32:08.448 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:32:08.448 slat (nsec): min=9513, max=22048, avg=13715.00, stdev=3968.68 00:32:08.448 clat (usec): min=40530, max=41927, avg=41011.51, stdev=241.01 00:32:08.448 lat (usec): min=40539, max=41939, avg=41025.23, stdev=240.58 00:32:08.448 clat percentiles (usec): 00:32:08.448 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:08.448 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:08.448 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:08.448 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:08.448 | 99.99th=[41681] 00:32:08.448 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:32:08.448 slat (nsec): min=10340, max=73088, avg=12144.15, stdev=3115.47 00:32:08.448 clat (usec): min=151, max=273, avg=184.02, stdev=19.08 00:32:08.448 lat (usec): min=162, max=284, avg=196.16, stdev=19.56 00:32:08.448 clat percentiles (usec): 00:32:08.448 | 1.00th=[ 155], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:32:08.448 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 188], 00:32:08.448 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 215], 00:32:08.448 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 273], 99.95th=[ 273], 00:32:08.448 | 99.99th=[ 273] 00:32:08.448 bw ( KiB/s): min= 4096, max= 4096, per=16.92%, avg=4096.00, stdev= 0.00, samples=1 00:32:08.448 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:08.448 lat (usec) : 250=94.94%, 500=0.94% 00:32:08.448 lat (msec) : 50=4.12% 00:32:08.448 cpu : usr=0.30%, sys=1.10%, ctx=535, majf=0, minf=1 00:32:08.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.448 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:08.448 job2: (groupid=0, jobs=1): err= 0: pid=3074242: Tue Nov 19 13:24:11 2024 00:32:08.448 read: IOPS=2162, BW=8651KiB/s 
(8859kB/s)(8660KiB/1001msec) 00:32:08.448 slat (nsec): min=7420, max=35229, avg=8558.59, stdev=1290.43 00:32:08.448 clat (usec): min=186, max=40470, avg=235.97, stdev=865.28 00:32:08.448 lat (usec): min=196, max=40480, avg=244.53, stdev=865.32 00:32:08.448 clat percentiles (usec): 00:32:08.448 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 206], 20.00th=[ 210], 00:32:08.448 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 219], 00:32:08.448 | 70.00th=[ 221], 80.00th=[ 223], 90.00th=[ 229], 95.00th=[ 239], 00:32:08.448 | 99.00th=[ 253], 99.50th=[ 255], 99.90th=[ 310], 99.95th=[ 857], 00:32:08.448 | 99.99th=[40633] 00:32:08.448 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:08.448 slat (nsec): min=10612, max=43718, avg=11881.27, stdev=1738.31 00:32:08.448 clat (usec): min=141, max=865, avg=166.59, stdev=20.23 00:32:08.448 lat (usec): min=152, max=877, avg=178.47, stdev=20.43 00:32:08.448 clat percentiles (usec): 00:32:08.448 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:32:08.448 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:32:08.448 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 178], 95.00th=[ 184], 00:32:08.448 | 99.00th=[ 194], 99.50th=[ 206], 99.90th=[ 371], 99.95th=[ 660], 00:32:08.448 | 99.99th=[ 865] 00:32:08.448 bw ( KiB/s): min= 9840, max= 9840, per=40.66%, avg=9840.00, stdev= 0.00, samples=1 00:32:08.448 iops : min= 2460, max= 2460, avg=2460.00, stdev= 0.00, samples=1 00:32:08.448 lat (usec) : 250=99.24%, 500=0.68%, 750=0.02%, 1000=0.04% 00:32:08.448 lat (msec) : 50=0.02% 00:32:08.448 cpu : usr=3.30%, sys=8.20%, ctx=4726, majf=0, minf=1 00:32:08.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.448 issued rwts: total=2165,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:08.448 job3: (groupid=0, jobs=1): err= 0: pid=3074243: Tue Nov 19 13:24:11 2024 00:32:08.448 read: IOPS=22, BW=91.3KiB/s (93.5kB/s)(92.0KiB/1008msec) 00:32:08.448 slat (nsec): min=9046, max=25301, avg=20201.87, stdev=5065.79 00:32:08.448 clat (usec): min=445, max=41087, avg=39200.57, stdev=8448.89 00:32:08.448 lat (usec): min=454, max=41109, avg=39220.77, stdev=8451.22 00:32:08.448 clat percentiles (usec): 00:32:08.448 | 1.00th=[ 445], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:32:08.448 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:08.449 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:08.449 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:08.449 | 99.99th=[41157] 00:32:08.449 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:32:08.449 slat (nsec): min=10004, max=39761, avg=12026.46, stdev=2205.05 00:32:08.449 clat (usec): min=138, max=359, avg=191.02, stdev=21.95 00:32:08.449 lat (usec): min=148, max=370, avg=203.05, stdev=22.56 00:32:08.449 clat percentiles (usec): 00:32:08.449 | 1.00th=[ 147], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 176], 00:32:08.449 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:32:08.449 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 223], 00:32:08.449 | 99.00th=[ 260], 99.50th=[ 310], 99.90th=[ 359], 99.95th=[ 359], 00:32:08.449 | 99.99th=[ 359] 00:32:08.449 bw ( KiB/s): min= 4096, 
max= 4096, per=16.92%, avg=4096.00, stdev= 0.00, samples=1 00:32:08.449 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:08.449 lat (usec) : 250=94.58%, 500=1.31% 00:32:08.449 lat (msec) : 50=4.11% 00:32:08.449 cpu : usr=0.30%, sys=1.09%, ctx=535, majf=0, minf=2 00:32:08.449 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.449 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.449 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:08.449 00:32:08.449 Run status group 0 (all jobs): 00:32:08.449 READ: bw=16.5MiB/s (17.3MB/s), 87.6KiB/s-8651KiB/s (89.7kB/s-8859kB/s), io=16.6MiB (17.4MB), run=1001-1008msec 00:32:08.449 WRITE: bw=23.6MiB/s (24.8MB/s), 2032KiB/s-9.99MiB/s (2081kB/s-10.5MB/s), io=23.8MiB (25.0MB), run=1001-1008msec 00:32:08.449 00:32:08.449 Disk stats (read/write): 00:32:08.449 nvme0n1: ios=1837/2048, merge=0/0, ticks=1057/325, in_queue=1382, util=98.40% 00:32:08.449 nvme0n2: ios=48/512, merge=0/0, ticks=1037/92, in_queue=1129, util=96.75% 00:32:08.449 nvme0n3: ios=1965/2048, merge=0/0, ticks=1385/323, in_queue=1708, util=97.82% 00:32:08.449 nvme0n4: ios=75/512, merge=0/0, ticks=760/96, in_queue=856, util=90.89% 00:32:08.449 13:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:08.449 [global] 00:32:08.449 thread=1 00:32:08.449 invalidate=1 00:32:08.449 rw=write 00:32:08.449 time_based=1 00:32:08.449 runtime=1 00:32:08.449 ioengine=libaio 00:32:08.449 direct=1 00:32:08.449 bs=4096 00:32:08.449 iodepth=128 00:32:08.449 norandommap=0 00:32:08.449 numjobs=1 00:32:08.449 00:32:08.449 verify_dump=1 00:32:08.449 verify_backlog=512 00:32:08.449 verify_state_save=0 00:32:08.449 do_verify=1 00:32:08.449 verify=crc32c-intel 00:32:08.449 [job0] 00:32:08.449 filename=/dev/nvme0n1 00:32:08.449 [job1] 00:32:08.449 filename=/dev/nvme0n2 00:32:08.449 [job2] 00:32:08.449 filename=/dev/nvme0n3 00:32:08.449 [job3] 00:32:08.449 filename=/dev/nvme0n4 00:32:08.449 Could not set queue depth (nvme0n1) 00:32:08.449 Could not set queue depth (nvme0n2) 00:32:08.449 Could not set queue depth (nvme0n3) 00:32:08.449 Could not set queue depth (nvme0n4) 00:32:08.449 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:08.449 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:08.449 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:08.449 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:08.449 fio-3.35 00:32:08.449 Starting 4 threads 00:32:09.825 00:32:09.825 job0: (groupid=0, jobs=1): err= 0: pid=3074610: Tue Nov 19 13:24:13 2024 00:32:09.825 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:32:09.825 slat (nsec): min=1190, max=16185k, avg=84153.35, stdev=670007.63 00:32:09.825 clat (usec): min=5005, max=27216, avg=11080.25, stdev=3514.82 00:32:09.825 lat (usec): min=5016, max=27658, avg=11164.40, stdev=3566.50 00:32:09.825 clat percentiles (usec): 00:32:09.825 | 1.00th=[ 5997], 5.00th=[ 6849], 10.00th=[ 7963], 20.00th=[ 8586], 00:32:09.825 | 30.00th=[ 
8848], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[11338], 00:32:09.825 | 70.00th=[12387], 80.00th=[12911], 90.00th=[15533], 95.00th=[17957], 00:32:09.825 | 99.00th=[23987], 99.50th=[23987], 99.90th=[25822], 99.95th=[25822], 00:32:09.825 | 99.99th=[27132] 00:32:09.825 write: IOPS=6038, BW=23.6MiB/s (24.7MB/s)(23.7MiB/1003msec); 0 zone resets 00:32:09.825 slat (usec): min=2, max=12075, avg=80.98, stdev=558.23 00:32:09.825 clat (usec): min=1958, max=25818, avg=10386.82, stdev=3048.62 00:32:09.825 lat (usec): min=1967, max=25823, avg=10467.80, stdev=3088.88 00:32:09.825 clat percentiles (usec): 00:32:09.825 | 1.00th=[ 4883], 5.00th=[ 6456], 10.00th=[ 7635], 20.00th=[ 7963], 00:32:09.825 | 30.00th=[ 8717], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:32:09.825 | 70.00th=[10814], 80.00th=[11994], 90.00th=[14091], 95.00th=[16581], 00:32:09.825 | 99.00th=[21627], 99.50th=[23462], 99.90th=[24249], 99.95th=[24511], 00:32:09.825 | 99.99th=[25822] 00:32:09.825 bw ( KiB/s): min=22880, max=24560, per=33.83%, avg=23720.00, stdev=1187.94, samples=2 00:32:09.825 iops : min= 5720, max= 6140, avg=5930.00, stdev=296.98, samples=2 00:32:09.825 lat (msec) : 2=0.03%, 4=0.27%, 10=48.21%, 20=49.12%, 50=2.38% 00:32:09.825 cpu : usr=4.79%, sys=6.59%, ctx=474, majf=0, minf=1 00:32:09.825 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:32:09.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.825 issued rwts: total=5632,6057,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.825 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.825 job1: (groupid=0, jobs=1): err= 0: pid=3074611: Tue Nov 19 13:24:13 2024 00:32:09.825 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:32:09.825 slat (nsec): min=1433, max=14769k, avg=109302.37, stdev=778494.15 00:32:09.825 clat (usec): min=5042, max=46997, avg=14984.65, stdev=8320.16 00:32:09.825 lat (usec): min=5048, max=47007, avg=15093.96, stdev=8380.61 00:32:09.825 clat percentiles (usec): 00:32:09.825 | 1.00th=[ 5080], 5.00th=[ 7701], 10.00th=[ 8848], 20.00th=[ 9634], 00:32:09.825 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11600], 60.00th=[11994], 00:32:09.825 | 70.00th=[13042], 80.00th=[22152], 90.00th=[29492], 95.00th=[34341], 00:32:09.825 | 99.00th=[38011], 99.50th=[38536], 99.90th=[46924], 99.95th=[46924], 00:32:09.825 | 99.99th=[46924] 00:32:09.825 write: IOPS=4076, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:32:09.825 slat (usec): min=2, max=20196, avg=117.43, stdev=798.00 00:32:09.825 clat (usec): min=401, max=71207, avg=15755.58, stdev=11507.21 00:32:09.825 lat (usec): min=434, max=71216, avg=15873.01, stdev=11565.70 00:32:09.825 clat percentiles (usec): 00:32:09.825 | 1.00th=[ 3687], 5.00th=[ 5211], 10.00th=[ 7635], 20.00th=[ 9765], 00:32:09.825 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10814], 60.00th=[11600], 00:32:09.825 | 70.00th=[16450], 80.00th=[20579], 90.00th=[30540], 95.00th=[37487], 00:32:09.825 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:32:09.825 | 99.99th=[70779] 00:32:09.825 bw ( KiB/s): min=10880, max=21888, per=23.37%, avg=16384.00, stdev=7783.83, samples=2 00:32:09.825 iops : min= 2720, max= 5472, avg=4096.00, stdev=1945.96, samples=2 00:32:09.825 lat (usec) : 500=0.04%, 750=0.02% 00:32:09.825 lat (msec) : 2=0.27%, 4=0.60%, 10=22.67%, 20=54.38%, 50=20.87% 00:32:09.825 lat (msec) : 100=1.16% 00:32:09.825 cpu : usr=3.98%, sys=5.18%, 
ctx=322, majf=0, minf=1 00:32:09.825 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:09.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.825 issued rwts: total=4096,4097,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.825 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.825 job2: (groupid=0, jobs=1): err= 0: pid=3074612: Tue Nov 19 13:24:13 2024 00:32:09.825 read: IOPS=3821, BW=14.9MiB/s (15.7MB/s)(15.0MiB/1005msec) 00:32:09.825 slat (nsec): min=1179, max=18085k, avg=105759.49, stdev=823669.77 00:32:09.825 clat (usec): min=468, max=55440, avg=14754.89, stdev=6825.25 00:32:09.825 lat (usec): min=623, max=55450, avg=14860.65, stdev=6884.68 00:32:09.825 clat percentiles (usec): 00:32:09.825 | 1.00th=[ 1811], 5.00th=[ 3294], 10.00th=[ 7046], 20.00th=[10290], 00:32:09.825 | 30.00th=[13042], 40.00th=[13829], 50.00th=[14222], 60.00th=[14877], 00:32:09.825 | 70.00th=[16450], 80.00th=[17957], 90.00th=[22414], 95.00th=[25560], 00:32:09.825 | 99.00th=[46924], 99.50th=[51119], 99.90th=[55313], 99.95th=[55313], 00:32:09.825 | 99.99th=[55313] 00:32:09.825 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:32:09.825 slat (nsec): min=1996, max=12720k, avg=114333.99, stdev=833966.12 00:32:09.825 clat (usec): min=297, max=84685, avg=17257.12, stdev=13223.33 00:32:09.825 lat (usec): min=310, max=84696, avg=17371.46, stdev=13290.50 00:32:09.825 clat percentiles (usec): 00:32:09.825 | 1.00th=[ 979], 5.00th=[ 4113], 10.00th=[ 6194], 20.00th=[10290], 00:32:09.825 | 30.00th=[12256], 40.00th=[13304], 50.00th=[14091], 60.00th=[15401], 00:32:09.825 | 70.00th=[16712], 80.00th=[19530], 90.00th=[33162], 95.00th=[45876], 00:32:09.825 | 99.00th=[74974], 99.50th=[80217], 99.90th=[82314], 99.95th=[84411], 00:32:09.825 | 99.99th=[84411] 00:32:09.825 bw ( KiB/s): min=16384, max=16384, per=23.37%, avg=16384.00, stdev= 0.00, samples=2 00:32:09.825 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:32:09.825 lat (usec) : 500=0.09%, 750=0.10%, 1000=0.49% 00:32:09.825 lat (msec) : 2=1.03%, 4=3.93%, 10=12.30%, 20=67.19%, 50=12.46% 00:32:09.825 lat (msec) : 100=2.41% 00:32:09.825 cpu : usr=2.99%, sys=3.88%, ctx=292, majf=0, minf=1 00:32:09.825 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:09.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.825 issued rwts: total=3841,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.825 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.825 job3: (groupid=0, jobs=1): err= 0: pid=3074613: Tue Nov 19 13:24:13 2024 00:32:09.825 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:32:09.826 slat (nsec): min=1563, max=15904k, avg=136757.76, stdev=933004.73 00:32:09.826 clat (usec): min=2442, max=44935, avg=18416.15, stdev=8035.39 00:32:09.826 lat (usec): min=2445, max=46654, avg=18552.91, stdev=8093.36 00:32:09.826 clat percentiles (usec): 00:32:09.826 | 1.00th=[ 5080], 5.00th=[10028], 10.00th=[11338], 20.00th=[12518], 00:32:09.826 | 30.00th=[13042], 40.00th=[14222], 50.00th=[15139], 60.00th=[17433], 00:32:09.826 | 70.00th=[20841], 80.00th=[25560], 90.00th=[32113], 95.00th=[35390], 00:32:09.826 | 99.00th=[38536], 99.50th=[41157], 99.90th=[42730], 99.95th=[44303], 00:32:09.826 | 99.99th=[44827] 00:32:09.826 write: 
IOPS=3351, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1004msec); 0 zone resets 00:32:09.826 slat (usec): min=2, max=22440, avg=165.05, stdev=1041.75 00:32:09.826 clat (usec): min=532, max=58746, avg=20929.19, stdev=12504.42 00:32:09.826 lat (usec): min=4086, max=58758, avg=21094.24, stdev=12608.82 00:32:09.826 clat percentiles (usec): 00:32:09.826 | 1.00th=[ 4293], 5.00th=[ 5014], 10.00th=[10814], 20.00th=[12518], 00:32:09.826 | 30.00th=[13829], 40.00th=[15533], 50.00th=[16712], 60.00th=[18220], 00:32:09.826 | 70.00th=[23462], 80.00th=[26870], 90.00th=[35914], 95.00th=[54789], 00:32:09.826 | 99.00th=[58459], 99.50th=[58459], 99.90th=[58983], 99.95th=[58983], 00:32:09.826 | 99.99th=[58983] 00:32:09.826 bw ( KiB/s): min=10032, max=15864, per=18.47%, avg=12948.00, stdev=4123.85, samples=2 00:32:09.826 iops : min= 2508, max= 3966, avg=3237.00, stdev=1030.96, samples=2 00:32:09.826 lat (usec) : 750=0.02% 00:32:09.826 lat (msec) : 4=0.08%, 10=6.66%, 20=60.37%, 50=29.10%, 100=3.78% 00:32:09.826 cpu : usr=2.59%, sys=3.99%, ctx=283, majf=0, minf=2 00:32:09.826 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:32:09.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.826 issued rwts: total=3072,3365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.826 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.826 00:32:09.826 Run status group 0 (all jobs): 00:32:09.826 READ: bw=64.7MiB/s (67.8MB/s), 12.0MiB/s-21.9MiB/s (12.5MB/s-23.0MB/s), io=65.0MiB (68.2MB), run=1003-1005msec 00:32:09.826 WRITE: bw=68.5MiB/s (71.8MB/s), 13.1MiB/s-23.6MiB/s (13.7MB/s-24.7MB/s), io=68.8MiB (72.2MB), run=1003-1005msec 00:32:09.826 00:32:09.826 Disk stats (read/write): 00:32:09.826 nvme0n1: ios=4659/4726, merge=0/0, ticks=41882/39786, in_queue=81668, util=88.78% 00:32:09.826 nvme0n2: ios=2922/3072, merge=0/0, ticks=21579/19807, in_queue=41386, util=92.70% 00:32:09.826 nvme0n3: ios=3132/3229, merge=0/0, ticks=41972/45875, in_queue=87847, util=97.06% 00:32:09.826 nvme0n4: ios=2596/2679, merge=0/0, ticks=22944/25346, in_queue=48290, util=99.45% 00:32:09.826 13:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:09.826 [global] 00:32:09.826 thread=1 00:32:09.826 invalidate=1 00:32:09.826 rw=randwrite 00:32:09.826 time_based=1 00:32:09.826 runtime=1 00:32:09.826 ioengine=libaio 00:32:09.826 direct=1 00:32:09.826 bs=4096 00:32:09.826 iodepth=128 00:32:09.826 norandommap=0 00:32:09.826 numjobs=1 00:32:09.826 00:32:09.826 verify_dump=1 00:32:09.826 verify_backlog=512 00:32:09.826 verify_state_save=0 00:32:09.826 do_verify=1 00:32:09.826 verify=crc32c-intel 00:32:09.826 [job0] 00:32:09.826 filename=/dev/nvme0n1 00:32:09.826 [job1] 00:32:09.826 filename=/dev/nvme0n2 00:32:09.826 [job2] 00:32:09.826 filename=/dev/nvme0n3 00:32:09.826 [job3] 00:32:09.826 filename=/dev/nvme0n4 00:32:09.826 Could not set queue depth (nvme0n1) 00:32:09.826 Could not set queue depth (nvme0n2) 00:32:09.826 Could not set queue depth (nvme0n3) 00:32:09.826 Could not set queue depth (nvme0n4) 00:32:10.084 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.084 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.084 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.084 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:10.084 fio-3.35 00:32:10.084 Starting 4 threads 00:32:11.457 00:32:11.457 job0: (groupid=0, jobs=1): err= 0: pid=3074984: Tue Nov 19 13:24:14 2024 00:32:11.457 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:32:11.457 slat (nsec): min=1060, max=22040k, avg=145762.90, stdev=1072540.13 00:32:11.457 clat (usec): min=1628, max=58611, avg=17530.22, stdev=10723.84 00:32:11.457 lat (usec): min=1636, max=58618, avg=17675.98, stdev=10784.69 00:32:11.457 clat percentiles (usec): 00:32:11.457 | 1.00th=[ 3097], 5.00th=[ 4293], 10.00th=[ 8291], 20.00th=[ 9634], 00:32:11.457 | 30.00th=[10159], 40.00th=[11469], 50.00th=[15270], 60.00th=[16712], 00:32:11.457 | 70.00th=[20579], 80.00th=[26346], 90.00th=[34866], 95.00th=[39584], 00:32:11.457 | 99.00th=[58459], 99.50th=[58459], 99.90th=[58459], 99.95th=[58459], 00:32:11.457 | 99.99th=[58459] 00:32:11.457 write: IOPS=3142, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1005msec); 0 zone resets 00:32:11.457 slat (nsec): min=1838, max=38574k, avg=168109.92, stdev=1375235.76 00:32:11.457 clat (usec): min=820, max=98511, avg=21606.25, stdev=18481.80 00:32:11.457 lat (usec): min=2790, max=98518, avg=21774.36, stdev=18588.69 00:32:11.457 clat percentiles (usec): 00:32:11.457 | 1.00th=[ 4555], 5.00th=[ 7242], 10.00th=[ 8717], 20.00th=[ 9765], 00:32:11.457 | 30.00th=[10028], 40.00th=[10290], 50.00th=[12649], 60.00th=[15795], 00:32:11.457 | 70.00th=[18220], 80.00th=[38011], 90.00th=[51643], 95.00th=[57934], 00:32:11.457 | 99.00th=[80217], 99.50th=[98042], 99.90th=[98042], 99.95th=[98042], 00:32:11.457 | 99.99th=[98042] 00:32:11.457 bw ( KiB/s): min= 8192, max=16448, per=19.40%, avg=12320.00, stdev=5837.87, samples=2 00:32:11.457 iops : min= 2048, max= 4112, avg=3080.00, stdev=1459.47, samples=2 00:32:11.457 lat (usec) : 1000=0.02% 00:32:11.457 lat (msec) : 2=0.26%, 4=1.35%, 10=23.71%, 20=44.93%, 50=23.74% 00:32:11.457 lat (msec) : 100=6.00% 00:32:11.457 cpu : usr=1.69%, sys=3.39%, ctx=246, majf=0, minf=1 00:32:11.457 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:32:11.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.457 issued rwts: total=3072,3158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.457 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.457 job1: (groupid=0, jobs=1): err= 0: pid=3074985: Tue Nov 19 13:24:14 2024 00:32:11.457 read: IOPS=3017, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1018msec) 00:32:11.457 slat (nsec): min=1462, max=18528k, avg=145470.66, stdev=1032891.85 00:32:11.457 clat (msec): min=5, max=110, avg=15.78, stdev=10.66 00:32:11.457 lat (msec): min=5, max=110, avg=15.93, stdev=10.83 00:32:11.457 clat percentiles (msec): 00:32:11.457 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:32:11.457 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 14], 00:32:11.457 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 23], 95.00th=[ 31], 00:32:11.457 | 99.00th=[ 67], 99.50th=[ 89], 99.90th=[ 111], 99.95th=[ 111], 00:32:11.457 | 99.99th=[ 111] 00:32:11.457 write: IOPS=3157, BW=12.3MiB/s (12.9MB/s)(12.6MiB/1018msec); 0 zone resets 00:32:11.457 slat (usec): min=2, max=15726, avg=161.52, stdev=872.33 00:32:11.457 clat (usec): min=1866, max=111183, avg=24950.06, 
stdev=21888.57 00:32:11.457 lat (usec): min=1876, max=111193, avg=25111.58, stdev=22003.21 00:32:11.457 clat percentiles (msec): 00:32:11.457 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:32:11.457 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 18], 60.00th=[ 21], 00:32:11.457 | 70.00th=[ 30], 80.00th=[ 36], 90.00th=[ 57], 95.00th=[ 65], 00:32:11.457 | 99.00th=[ 110], 99.50th=[ 111], 99.90th=[ 112], 99.95th=[ 112], 00:32:11.457 | 99.99th=[ 112] 00:32:11.457 bw ( KiB/s): min= 8304, max=16384, per=19.44%, avg=12344.00, stdev=5713.42, samples=2 00:32:11.457 iops : min= 2076, max= 4096, avg=3086.00, stdev=1428.36, samples=2 00:32:11.457 lat (msec) : 2=0.32%, 4=0.10%, 10=13.19%, 20=54.28%, 50=24.96% 00:32:11.457 lat (msec) : 100=5.54%, 250=1.62% 00:32:11.457 cpu : usr=2.85%, sys=3.74%, ctx=284, majf=0, minf=1 00:32:11.457 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:32:11.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.457 issued rwts: total=3072,3214,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.457 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.457 job2: (groupid=0, jobs=1): err= 0: pid=3074993: Tue Nov 19 13:24:14 2024 00:32:11.457 read: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec) 00:32:11.457 slat (nsec): min=1302, max=11806k, avg=76362.06, stdev=627014.49 00:32:11.457 clat (usec): min=2370, max=23928, avg=9842.58, stdev=3048.08 00:32:11.457 lat (usec): min=2377, max=23931, avg=9918.94, stdev=3093.53 00:32:11.457 clat percentiles (usec): 00:32:11.457 | 1.00th=[ 3720], 5.00th=[ 6063], 10.00th=[ 7046], 20.00th=[ 7898], 00:32:11.457 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9765], 00:32:11.457 | 70.00th=[10290], 80.00th=[11338], 90.00th=[14353], 95.00th=[16057], 00:32:11.457 | 99.00th=[19530], 99.50th=[21103], 99.90th=[23462], 99.95th=[23987], 00:32:11.457 | 99.99th=[23987] 00:32:11.457 write: IOPS=6941, BW=27.1MiB/s (28.4MB/s)(27.3MiB/1008msec); 0 zone resets 00:32:11.457 slat (usec): min=2, max=9477, avg=61.23, stdev=399.30 00:32:11.457 clat (usec): min=632, max=23923, avg=8872.77, stdev=2257.27 00:32:11.457 lat (usec): min=654, max=23925, avg=8934.00, stdev=2277.90 00:32:11.457 clat percentiles (usec): 00:32:11.457 | 1.00th=[ 2311], 5.00th=[ 5145], 10.00th=[ 6259], 20.00th=[ 7111], 00:32:11.457 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:32:11.457 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[11731], 95.00th=[12387], 00:32:11.457 | 99.00th=[15270], 99.50th=[15401], 99.90th=[17433], 99.95th=[20841], 00:32:11.457 | 99.99th=[23987] 00:32:11.457 bw ( KiB/s): min=26280, max=28672, per=43.27%, avg=27476.00, stdev=1691.40, samples=2 00:32:11.457 iops : min= 6570, max= 7168, avg=6869.00, stdev=422.85, samples=2 00:32:11.457 lat (usec) : 750=0.03% 00:32:11.457 lat (msec) : 2=0.15%, 4=1.46%, 10=69.80%, 20=28.07%, 50=0.48% 00:32:11.457 cpu : usr=4.67%, sys=6.16%, ctx=651, majf=0, minf=2 00:32:11.457 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:32:11.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.457 issued rwts: total=6656,6997,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.457 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.457 job3: (groupid=0, jobs=1): err= 0: pid=3074999: Tue Nov 19 13:24:14 
2024 00:32:11.457 read: IOPS=2514, BW=9.82MiB/s (10.3MB/s)(10.0MiB/1018msec) 00:32:11.457 slat (nsec): min=1318, max=19164k, avg=143755.53, stdev=1028557.73 00:32:11.457 clat (usec): min=4856, max=49821, avg=15752.40, stdev=6469.04 00:32:11.457 lat (usec): min=4866, max=49826, avg=15896.16, stdev=6574.33 00:32:11.457 clat percentiles (usec): 00:32:11.457 | 1.00th=[ 7177], 5.00th=[11731], 10.00th=[11863], 20.00th=[11994], 00:32:11.457 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[13566], 00:32:11.457 | 70.00th=[17695], 80.00th=[19006], 90.00th=[23725], 95.00th=[27657], 00:32:11.457 | 99.00th=[43254], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:32:11.457 | 99.99th=[50070] 00:32:11.457 write: IOPS=2740, BW=10.7MiB/s (11.2MB/s)(10.9MiB/1018msec); 0 zone resets 00:32:11.457 slat (usec): min=2, max=15855, avg=222.13, stdev=1094.88 00:32:11.457 clat (usec): min=1432, max=111651, avg=31825.03, stdev=23868.32 00:32:11.457 lat (usec): min=1451, max=111662, avg=32047.16, stdev=24016.87 00:32:11.457 clat percentiles (msec): 00:32:11.457 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:32:11.457 | 30.00th=[ 14], 40.00th=[ 18], 50.00th=[ 23], 60.00th=[ 33], 00:32:11.457 | 70.00th=[ 37], 80.00th=[ 53], 90.00th=[ 69], 95.00th=[ 81], 00:32:11.457 | 99.00th=[ 103], 99.50th=[ 107], 99.90th=[ 112], 99.95th=[ 112], 00:32:11.457 | 99.99th=[ 112] 00:32:11.457 bw ( KiB/s): min= 9648, max=11648, per=16.77%, avg=10648.00, stdev=1414.21, samples=2 00:32:11.457 iops : min= 2412, max= 2912, avg=2662.00, stdev=353.55, samples=2 00:32:11.457 lat (msec) : 2=0.34%, 4=0.22%, 10=2.97%, 20=59.25%, 50=26.39% 00:32:11.457 lat (msec) : 100=9.87%, 250=0.95% 00:32:11.457 cpu : usr=2.06%, sys=3.24%, ctx=267, majf=0, minf=2 00:32:11.457 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:32:11.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.457 issued rwts: total=2560,2790,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.457 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.458 00:32:11.458 Run status group 0 (all jobs): 00:32:11.458 READ: bw=58.9MiB/s (61.8MB/s), 9.82MiB/s-25.8MiB/s (10.3MB/s-27.0MB/s), io=60.0MiB (62.9MB), run=1005-1018msec 00:32:11.458 WRITE: bw=62.0MiB/s (65.0MB/s), 10.7MiB/s-27.1MiB/s (11.2MB/s-28.4MB/s), io=63.1MiB (66.2MB), run=1005-1018msec 00:32:11.458 00:32:11.458 Disk stats (read/write): 00:32:11.458 nvme0n1: ios=1825/2048, merge=0/0, ticks=13117/17879, in_queue=30996, util=97.39% 00:32:11.458 nvme0n2: ios=2594/2887, merge=0/0, ticks=37343/59602, in_queue=96945, util=98.67% 00:32:11.458 nvme0n3: ios=5585/5635, merge=0/0, ticks=49278/45681, in_queue=94959, util=90.13% 00:32:11.458 nvme0n4: ios=1655/2048, merge=0/0, ticks=28234/71627, in_queue=99861, util=89.05% 00:32:11.458 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:11.458 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3075218 00:32:11.458 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:11.458 13:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:11.458 [global] 00:32:11.458 thread=1 00:32:11.458 invalidate=1 00:32:11.458 rw=read 00:32:11.458 
time_based=1 00:32:11.458 runtime=10 00:32:11.458 ioengine=libaio 00:32:11.458 direct=1 00:32:11.458 bs=4096 00:32:11.458 iodepth=1 00:32:11.458 norandommap=1 00:32:11.458 numjobs=1 00:32:11.458 00:32:11.458 [job0] 00:32:11.458 filename=/dev/nvme0n1 00:32:11.458 [job1] 00:32:11.458 filename=/dev/nvme0n2 00:32:11.458 [job2] 00:32:11.458 filename=/dev/nvme0n3 00:32:11.458 [job3] 00:32:11.458 filename=/dev/nvme0n4 00:32:11.458 Could not set queue depth (nvme0n1) 00:32:11.458 Could not set queue depth (nvme0n2) 00:32:11.458 Could not set queue depth (nvme0n3) 00:32:11.458 Could not set queue depth (nvme0n4) 00:32:11.715 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:11.715 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:11.715 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:11.715 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:11.715 fio-3.35 00:32:11.715 Starting 4 threads 00:32:14.993 13:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:14.993 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=24334336, buflen=4096 00:32:14.993 fio: pid=3075412, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:14.993 13:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:14.993 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=48074752, buflen=4096 00:32:14.993 fio: pid=3075407, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:14.994 13:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:14.994 13:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:14.994 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11268096, buflen=4096 00:32:14.994 fio: pid=3075381, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:14.994 13:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:14.994 13:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:15.303 13:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:15.303 13:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:15.303 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=41254912, buflen=4096 00:32:15.303 fio: pid=3075392, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:15.303 00:32:15.303 job0: (groupid=0, jobs=1): err=95 
(file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3075381: Tue Nov 19 13:24:18 2024 00:32:15.303 read: IOPS=867, BW=3470KiB/s (3553kB/s)(10.7MiB/3171msec) 00:32:15.303 slat (usec): min=3, max=11782, avg=13.96, stdev=249.37 00:32:15.303 clat (usec): min=176, max=41992, avg=1126.35, stdev=5854.68 00:32:15.303 lat (usec): min=183, max=46926, avg=1140.31, stdev=5874.02 00:32:15.303 clat percentiles (usec): 00:32:15.303 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 215], 20.00th=[ 225], 00:32:15.303 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:32:15.303 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 314], 95.00th=[ 343], 00:32:15.303 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:32:15.303 | 99.99th=[42206] 00:32:15.303 bw ( KiB/s): min= 328, max= 7648, per=9.81%, avg=3523.00, stdev=3399.85, samples=6 00:32:15.303 iops : min= 82, max= 1912, avg=880.67, stdev=850.03, samples=6 00:32:15.303 lat (usec) : 250=59.01%, 500=38.52%, 750=0.11% 00:32:15.303 lat (msec) : 4=0.07%, 10=0.07%, 20=0.04%, 50=2.14% 00:32:15.303 cpu : usr=0.13%, sys=0.91%, ctx=2754, majf=0, minf=1 00:32:15.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.303 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.303 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:15.303 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3075392: Tue Nov 19 13:24:18 2024 00:32:15.303 read: IOPS=2963, BW=11.6MiB/s (12.1MB/s)(39.3MiB/3399msec) 00:32:15.303 slat (usec): min=6, max=17312, avg=12.44, stdev=254.96 00:32:15.303 clat (usec): min=171, max=41313, avg=321.56, stdev=1889.64 00:32:15.303 lat (usec): min=178, max=41320, avg=334.00, stdev=1907.28 00:32:15.303 clat percentiles (usec): 00:32:15.303 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 210], 20.00th=[ 219], 00:32:15.303 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 231], 00:32:15.303 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 269], 00:32:15.303 | 99.00th=[ 338], 99.50th=[ 437], 99.90th=[41157], 99.95th=[41157], 00:32:15.303 | 99.99th=[41157] 00:32:15.303 bw ( KiB/s): min= 5776, max=17032, per=35.27%, avg=12659.67, stdev=4754.95, samples=6 00:32:15.303 iops : min= 1444, max= 4258, avg=3164.83, stdev=1188.66, samples=6 00:32:15.303 lat (usec) : 250=86.52%, 500=13.14%, 750=0.06%, 1000=0.01% 00:32:15.303 lat (msec) : 2=0.02%, 4=0.01%, 50=0.23% 00:32:15.303 cpu : usr=0.62%, sys=2.83%, ctx=10079, majf=0, minf=2 00:32:15.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.303 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.303 issued rwts: total=10073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:15.304 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3075407: Tue Nov 19 13:24:18 2024 00:32:15.304 read: IOPS=4006, BW=15.6MiB/s (16.4MB/s)(45.8MiB/2930msec) 00:32:15.304 slat (nsec): min=6328, max=33664, avg=8141.62, stdev=1627.86 00:32:15.304 clat (usec): min=172, max=847, avg=238.33, stdev=23.30 00:32:15.304 lat (usec): min=180, max=854, 
avg=246.47, stdev=23.67 00:32:15.304 clat percentiles (usec): 00:32:15.304 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 227], 00:32:15.304 | 30.00th=[ 231], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 241], 00:32:15.304 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 269], 00:32:15.304 | 99.00th=[ 297], 99.50th=[ 330], 99.90th=[ 474], 99.95th=[ 498], 00:32:15.304 | 99.99th=[ 693] 00:32:15.304 bw ( KiB/s): min=15568, max=17208, per=44.99%, avg=16150.40, stdev=652.46, samples=5 00:32:15.304 iops : min= 3892, max= 4302, avg=4037.60, stdev=163.12, samples=5 00:32:15.304 lat (usec) : 250=78.87%, 500=21.08%, 750=0.03%, 1000=0.01% 00:32:15.304 cpu : usr=1.30%, sys=3.82%, ctx=11739, majf=0, minf=2 00:32:15.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.304 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.304 issued rwts: total=11738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:15.304 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3075412: Tue Nov 19 13:24:18 2024 00:32:15.304 read: IOPS=2182, BW=8730KiB/s (8940kB/s)(23.2MiB/2722msec) 00:32:15.304 slat (nsec): min=6376, max=46310, avg=7858.82, stdev=1515.38 00:32:15.304 clat (usec): min=170, max=41117, avg=444.67, stdev=2633.95 00:32:15.304 lat (usec): min=178, max=41140, avg=452.53, stdev=2634.88 00:32:15.304 clat percentiles (usec): 00:32:15.304 | 1.00th=[ 194], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 249], 00:32:15.304 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:32:15.304 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 314], 00:32:15.304 | 99.00th=[ 343], 99.50th=[ 429], 99.90th=[41157], 99.95th=[41157], 00:32:15.304 | 99.99th=[41157] 00:32:15.304 bw ( KiB/s): min= 96, max=15704, per=23.57%, avg=8462.40, stdev=7410.22, samples=5 00:32:15.304 iops : min= 24, max= 3926, avg=2115.60, stdev=1852.55, samples=5 00:32:15.304 lat (usec) : 250=20.87%, 500=78.69% 00:32:15.304 lat (msec) : 50=0.42% 00:32:15.304 cpu : usr=0.66%, sys=2.02%, ctx=5944, majf=0, minf=2 00:32:15.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.304 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.304 issued rwts: total=5942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:15.304 00:32:15.304 Run status group 0 (all jobs): 00:32:15.304 READ: bw=35.1MiB/s (36.8MB/s), 3470KiB/s-15.6MiB/s (3553kB/s-16.4MB/s), io=119MiB (125MB), run=2722-3399msec 00:32:15.304 00:32:15.304 Disk stats (read/write): 00:32:15.304 nvme0n1: ios=2671/0, merge=0/0, ticks=3040/0, in_queue=3040, util=95.22% 00:32:15.304 nvme0n2: ios=10108/0, merge=0/0, ticks=3839/0, in_queue=3839, util=97.33% 00:32:15.304 nvme0n3: ios=11517/0, merge=0/0, ticks=2702/0, in_queue=2702, util=96.52% 00:32:15.304 nvme0n4: ios=5678/0, merge=0/0, ticks=2920/0, in_queue=2920, util=98.93% 00:32:15.579 13:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:15.579 13:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:15.579 13:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:15.579 13:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:15.854 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:15.854 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:16.111 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:16.111 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:16.369 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:16.369 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3075218 00:32:16.369 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:16.369 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:16.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:16.369 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:16.369 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:32:16.369 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:16.369 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:16.369 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:16.369 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:16.369 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:32:16.369 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:16.369 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:16.369 nvmf hotplug test: fio failed as expected 00:32:16.369 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 
-- # rm -f ./local-job1-1-verify.state 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:16.627 rmmod nvme_tcp 00:32:16.627 rmmod nvme_fabrics 00:32:16.627 rmmod nvme_keyring 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3072354 ']' 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3072354 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3072354 ']' 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3072354 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:16.627 13:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3072354 00:32:16.886 13:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:16.886 13:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:16.886 13:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3072354' 00:32:16.886 killing process with pid 3072354 00:32:16.886 13:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3072354 00:32:16.886 13:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3072354 00:32:16.886 13:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:16.886 13:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:16.886 13:24:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:16.886 13:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:16.886 13:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:32:16.886 13:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:16.886 13:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:16.886 13:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:16.886 13:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:16.886 13:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.886 13:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.886 13:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.421 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:19.421 00:32:19.421 real 0m25.983s 00:32:19.421 user 1m30.285s 00:32:19.421 sys 0m11.682s 00:32:19.421 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:19.421 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:19.421 ************************************ 00:32:19.421 END TEST nvmf_fio_target 00:32:19.422 ************************************ 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:19.422 ************************************ 00:32:19.422 START TEST nvmf_bdevio 00:32:19.422 ************************************ 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:19.422 * Looking for test storage... 
00:32:19.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:19.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.422 --rc genhtml_branch_coverage=1 00:32:19.422 --rc genhtml_function_coverage=1 00:32:19.422 --rc genhtml_legend=1 00:32:19.422 --rc geninfo_all_blocks=1 00:32:19.422 --rc geninfo_unexecuted_blocks=1 00:32:19.422 00:32:19.422 ' 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:19.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.422 --rc genhtml_branch_coverage=1 00:32:19.422 --rc genhtml_function_coverage=1 00:32:19.422 --rc genhtml_legend=1 00:32:19.422 --rc geninfo_all_blocks=1 00:32:19.422 --rc geninfo_unexecuted_blocks=1 00:32:19.422 00:32:19.422 ' 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:19.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.422 --rc genhtml_branch_coverage=1 00:32:19.422 --rc genhtml_function_coverage=1 00:32:19.422 --rc genhtml_legend=1 00:32:19.422 --rc geninfo_all_blocks=1 00:32:19.422 --rc geninfo_unexecuted_blocks=1 00:32:19.422 00:32:19.422 ' 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:19.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.422 --rc genhtml_branch_coverage=1 00:32:19.422 --rc genhtml_function_coverage=1 00:32:19.422 --rc genhtml_legend=1 00:32:19.422 --rc geninfo_all_blocks=1 00:32:19.422 --rc geninfo_unexecuted_blocks=1 00:32:19.422 00:32:19.422 ' 00:32:19.422 13:24:22 
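The xtrace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x: cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field. A minimal standalone sketch of that comparison, assuming purely numeric fields (not the exact scripts/common.sh code):

# Sketch of the cmp_versions idea traced above: split on '.', '-' or ':'
# and compare numerically, field by field.
lt() {
    local IFS=.-: i ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( i = 0; i < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); i++ )); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # strictly smaller field: less-than
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1   # strictly larger field: not less-than
    done
    return 1   # all fields equal: equal is not less-than
}
lt 1.15 2 && echo "old lcov: use the --rc lcov_* option spelling"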
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.422 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:19.423 13:24:22 
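Through common.sh@29-34 here, the target's argument vector is grown conditionally in the NVMF_APP bash array (the --interrupt-mode append lands just below). A simplified sketch of that assembly; SPDK_BIN_DIR and the interrupt-mode guard variable are illustrative names, not the exact common.sh internals:

# Simplified sketch of the build_nvmf_app_args flag assembly traced here.
NVMF_APP=("${SPDK_BIN_DIR:-./build/bin}/nvmf_tgt")   # binary path: illustrative default
NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)     # shm id + full tracepoint mask, as above
if [[ ${INTERRUPT_MODE:-1} -eq 1 ]]; then            # guard variable name: illustrative
    NVMF_APP+=(--interrupt-mode)                     # the '[' 1 -eq 1 ']' branch in the trace
fi
echo "target command line: ${NVMF_APP[*]}"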
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:19.423 13:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:25.989 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:25.989 13:24:28 
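The 'Found 0000:86:00.0 (0x8086 - 0x159b)' line comes from matching each PCI function against the e810/x722/mlx ID tables populated just above; 0x8086:0x159b is an Intel E810 part bound to the ice driver. Roughly the same classification can be done over lspci output, a sketch rather than the real pci_bus_cache machinery:

# Sketch: classify NICs by PCI vendor:device ID, as the tables above do.
declare -a e810 x722 mlx
while read -r addr vd; do
    case "$vd" in
        8086:1592|8086:159b) e810+=("$addr") ;;   # Intel E810 (ice)
        8086:37d2)           x722+=("$addr") ;;   # Intel X722
        15b3:*)              mlx+=("$addr")  ;;   # Mellanox/NVIDIA
    esac
done < <(lspci -Dn | awk '$2 ~ /^02/ {print $1, $3}')   # PCI class 02xx = network
printf 'Found %d E810 port(s): %s\n' "${#e810[@]}" "${e810[*]}"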
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:25.989 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:25.989 Found net devices under 0000:86:00.0: cvl_0_0 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
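The jump from a PCI address to a kernel interface name is just a sysfs glob, as the pci_net_devs assignment above shows: each bound network function exposes its netdev under its device directory.

# Map a PCI function to its kernel net device, as the glob above does.
pci=0000:86:00.0
ls "/sys/bus/pci/devices/$pci/net/"   # prints cvl_0_0 on this machine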
-- # [[ tcp == tcp ]] 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:25.989 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:25.990 Found net devices under 0000:86:00.1: cvl_0_1 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:25.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:25.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:32:25.990 00:32:25.990 --- 10.0.0.2 ping statistics --- 00:32:25.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.990 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:25.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:25.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:32:25.990 00:32:25.990 --- 10.0.0.1 ping statistics --- 00:32:25.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.990 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:25.990 13:24:28 
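Stripped of the xtrace noise, the nvmf_tcp_init sequence that just completed gives the target a private network namespace: cvl_0_0 (10.0.0.2) serves NVMe/TCP from inside cvl_0_0_ns_spdk, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator side, and the two pings verify both directions. Condensed from the commands traced above (run as root; device names as in this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The SPDK_NVMF comment on the iptables rule is what lets teardown strip it later with the iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline seen at the end of each test.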
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3079810 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3079810 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3079810 ']' 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.990 [2024-11-19 13:24:28.468053] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:25.990 [2024-11-19 13:24:28.468996] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:32:25.990 [2024-11-19 13:24:28.469030] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:25.990 [2024-11-19 13:24:28.535060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:25.990 [2024-11-19 13:24:28.577164] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:25.990 [2024-11-19 13:24:28.577202] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:25.990 [2024-11-19 13:24:28.577209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:25.990 [2024-11-19 13:24:28.577215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:25.990 [2024-11-19 13:24:28.577221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:25.990 [2024-11-19 13:24:28.578693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:25.990 [2024-11-19 13:24:28.578806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:25.990 [2024-11-19 13:24:28.578711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:25.990 [2024-11-19 13:24:28.578807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:25.990 [2024-11-19 13:24:28.644681] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:32:25.990 [2024-11-19 13:24:28.644804] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:25.990 [2024-11-19 13:24:28.645489] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:25.990 [2024-11-19 13:24:28.645566] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:25.990 [2024-11-19 13:24:28.645686] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.990 [2024-11-19 13:24:28.715644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.990 Malloc0 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.990 13:24:28 
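Between the nvmf_tgt launch and the rpc_cmd calls traced here (plus the listener registration that follows just below), the bdevio target bring-up reduces to a handful of commands. A condensed replay, assuming the rpc.py front end; the test drives the same RPC methods through rpc_cmd:

# Condensed replay of the target bring-up traced in this test.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF \
    --interrupt-mode -m 0x78 &                   # reactors on cores 3-6, interrupt mode
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                     # allow any host, set serial
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420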
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:25.990 [2024-11-19 13:24:28.799818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:25.990 { 00:32:25.990 "params": { 00:32:25.990 "name": "Nvme$subsystem", 00:32:25.990 "trtype": "$TEST_TRANSPORT", 00:32:25.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.990 "adrfam": "ipv4", 00:32:25.990 "trsvcid": "$NVMF_PORT", 00:32:25.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.990 "hdgst": ${hdgst:-false}, 00:32:25.990 "ddgst": ${ddgst:-false} 00:32:25.990 }, 00:32:25.990 "method": "bdev_nvme_attach_controller" 00:32:25.990 } 00:32:25.990 EOF 00:32:25.990 )") 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:25.990 13:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:25.990 "params": { 00:32:25.990 "name": "Nvme1", 00:32:25.990 "trtype": "tcp", 00:32:25.990 "traddr": "10.0.0.2", 00:32:25.990 "adrfam": "ipv4", 00:32:25.990 "trsvcid": "4420", 00:32:25.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:25.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:25.991 "hdgst": false, 00:32:25.991 "ddgst": false 00:32:25.991 }, 00:32:25.991 "method": "bdev_nvme_attach_controller" 00:32:25.991 }' 00:32:25.991 [2024-11-19 13:24:28.852567] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
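Reassembled from the heredoc and printf trace above, the controller entry that gen_nvmf_target_json feeds to bdevio over /dev/fd/62 is the following; the helper wraps it in a fuller JSON config before handing it over, and that wrapping is elided here:

{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}

So the bdevio process attaches to the target as an NVMe/TCP initiator and runs its block-device test suite against the resulting Nvme1n1 namespace.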
00:32:25.991 [2024-11-19 13:24:28.852614] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3079840 ] 00:32:25.991 [2024-11-19 13:24:28.927056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:25.991 [2024-11-19 13:24:28.970789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.991 [2024-11-19 13:24:28.970899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.991 [2024-11-19 13:24:28.970900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:25.991 I/O targets: 00:32:25.991 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:25.991 00:32:25.991 00:32:25.991 CUnit - A unit testing framework for C - Version 2.1-3 00:32:25.991 http://cunit.sourceforge.net/ 00:32:25.991 00:32:25.991 00:32:25.991 Suite: bdevio tests on: Nvme1n1 00:32:25.991 Test: blockdev write read block ...passed 00:32:25.991 Test: blockdev write zeroes read block ...passed 00:32:25.991 Test: blockdev write zeroes read no split ...passed 00:32:25.991 Test: blockdev write zeroes read split ...passed 00:32:26.248 Test: blockdev write zeroes read split partial ...passed 00:32:26.248 Test: blockdev reset ...[2024-11-19 13:24:29.399507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:26.248 [2024-11-19 13:24:29.399569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedc340 (9): Bad file descriptor 00:32:26.248 [2024-11-19 13:24:29.403260] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:32:26.248 passed 00:32:26.248 Test: blockdev write read 8 blocks ...passed 00:32:26.248 Test: blockdev write read size > 128k ...passed 00:32:26.248 Test: blockdev write read invalid size ...passed 00:32:26.248 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:26.248 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:26.248 Test: blockdev write read max offset ...passed 00:32:26.248 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:26.248 Test: blockdev writev readv 8 blocks ...passed 00:32:26.248 Test: blockdev writev readv 30 x 1block ...passed 00:32:26.506 Test: blockdev writev readv block ...passed 00:32:26.506 Test: blockdev writev readv size > 128k ...passed 00:32:26.506 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:26.506 Test: blockdev comparev and writev ...[2024-11-19 13:24:29.656289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:26.506 [2024-11-19 13:24:29.656318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.506 [2024-11-19 13:24:29.656332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:26.506 [2024-11-19 13:24:29.656339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:26.506 [2024-11-19 13:24:29.656634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:26.506 [2024-11-19 13:24:29.656646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:26.506 [2024-11-19 13:24:29.656658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:26.506 [2024-11-19 13:24:29.656666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:26.506 [2024-11-19 13:24:29.656959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:26.506 [2024-11-19 13:24:29.656970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:26.506 [2024-11-19 13:24:29.656982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:26.506 [2024-11-19 13:24:29.656994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:26.506 [2024-11-19 13:24:29.657277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:26.506 [2024-11-19 13:24:29.657288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:26.506 [2024-11-19 13:24:29.657300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:26.506 [2024-11-19 13:24:29.657307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:26.506 passed 00:32:26.506 Test: blockdev nvme passthru rw ...passed 00:32:26.506 Test: blockdev nvme passthru vendor specific ...[2024-11-19 13:24:29.739371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:26.506 [2024-11-19 13:24:29.739390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:26.506 [2024-11-19 13:24:29.739503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:26.506 [2024-11-19 13:24:29.739513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:26.506 [2024-11-19 13:24:29.739625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:26.506 [2024-11-19 13:24:29.739635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:26.506 [2024-11-19 13:24:29.739756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:26.506 [2024-11-19 13:24:29.739767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:26.506 passed 00:32:26.506 Test: blockdev nvme admin passthru ...passed 00:32:26.506 Test: blockdev copy ...passed 00:32:26.506 00:32:26.506 Run Summary: Type Total Ran Passed Failed Inactive 00:32:26.506 suites 1 1 n/a 0 0 00:32:26.506 tests 23 23 23 0 0 00:32:26.506 asserts 152 152 152 0 n/a 00:32:26.506 00:32:26.506 Elapsed time = 1.099 seconds 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:26.766 rmmod nvme_tcp 00:32:26.766 rmmod nvme_fabrics 00:32:26.766 rmmod nvme_keyring 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
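The parenthesized pairs in the qpair prints above are NVMe status as hex (SCT/SC): (02/85) is Compare Failure, (00/09) is Command Aborted due to Failed Fused Command, and (00/01) on the passthru tests is Invalid Command Opcode. In other words, the comparev/writev test issues fused COMPARE+WRITE pairs, the COMPARE miscompares, and the paired WRITE is aborted, which is exactly the outcome the test asserts. A tiny decoder covering just the codes seen in this run:

# Decode the (SCT/SC) status pairs printed by the qpair traces above.
decode_status() {   # usage: decode_status 02 85
    case "$1/$2" in
        00/00) echo "Successful Completion" ;;
        00/01) echo "Invalid Command Opcode" ;;
        00/09) echo "Command Aborted due to Failed Fused Command" ;;
        02/85) echo "Compare Failure" ;;
        *)     echo "SCT 0x$1 / SC 0x$2 (not in this table)" ;;
    esac
}
decode_status 02 85   # -> Compare Failure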
00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3079810 ']' 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3079810 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3079810 ']' 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3079810 00:32:26.766 13:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:26.766 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:26.766 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3079810 00:32:26.766 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:26.766 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:26.766 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3079810' 00:32:26.766 killing process with pid 3079810 00:32:26.766 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3079810 00:32:26.766 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3079810 00:32:27.025 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:27.025 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:27.025 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:27.025 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:32:27.025 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:27.025 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:27.025 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:27.025 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:27.025 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:27.025 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.025 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.025 13:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:28.928 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:29.186 00:32:29.186 real 0m9.978s 00:32:29.186 user 
0m9.134s 00:32:29.186 sys 0m5.226s 00:32:29.186 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.186 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:29.186 ************************************ 00:32:29.186 END TEST nvmf_bdevio 00:32:29.186 ************************************ 00:32:29.186 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:29.186 00:32:29.186 real 4m33.220s 00:32:29.186 user 9m4.091s 00:32:29.186 sys 1m52.027s 00:32:29.186 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.186 13:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:29.186 ************************************ 00:32:29.186 END TEST nvmf_target_core_interrupt_mode 00:32:29.186 ************************************ 00:32:29.186 13:24:32 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:29.186 13:24:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:29.186 13:24:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:29.186 13:24:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:29.186 ************************************ 00:32:29.186 START TEST nvmf_interrupt 00:32:29.186 ************************************ 00:32:29.186 13:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:29.186 * Looking for test storage... 
00:32:29.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:29.187 13:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:29.187 13:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:32:29.187 13:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:29.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.446 --rc genhtml_branch_coverage=1 00:32:29.446 --rc genhtml_function_coverage=1 00:32:29.446 --rc genhtml_legend=1 00:32:29.446 --rc geninfo_all_blocks=1 00:32:29.446 --rc geninfo_unexecuted_blocks=1 00:32:29.446 00:32:29.446 ' 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:29.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.446 --rc genhtml_branch_coverage=1 00:32:29.446 --rc genhtml_function_coverage=1 00:32:29.446 --rc genhtml_legend=1 00:32:29.446 --rc geninfo_all_blocks=1 00:32:29.446 --rc geninfo_unexecuted_blocks=1 00:32:29.446 00:32:29.446 ' 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:29.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.446 --rc genhtml_branch_coverage=1 00:32:29.446 --rc genhtml_function_coverage=1 00:32:29.446 --rc genhtml_legend=1 00:32:29.446 --rc geninfo_all_blocks=1 00:32:29.446 --rc geninfo_unexecuted_blocks=1 00:32:29.446 00:32:29.446 ' 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:29.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.446 --rc genhtml_branch_coverage=1 00:32:29.446 --rc genhtml_function_coverage=1 00:32:29.446 --rc genhtml_legend=1 00:32:29.446 --rc geninfo_all_blocks=1 00:32:29.446 --rc geninfo_unexecuted_blocks=1 00:32:29.446 00:32:29.446 ' 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.446 13:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:29.447 13:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.447 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:29.447 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:29.447 13:24:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:29.447 13:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:36.011 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.011 13:24:38 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:36.011 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:36.011 Found net devices under 0000:86:00.0: cvl_0_0 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:36.011 Found net devices under 0000:86:00.1: cvl_0_1 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:36.011 13:24:38 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:36.011 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:36.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:36.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:32:36.011 00:32:36.011 --- 10.0.0.2 ping statistics --- 00:32:36.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.012 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:36.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:36.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:32:36.012 00:32:36.012 --- 10.0.0.1 ping statistics --- 00:32:36.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.012 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3083505 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3083505 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3083505 ']' 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:36.012 [2024-11-19 13:24:38.497636] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:36.012 [2024-11-19 13:24:38.498652] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:32:36.012 [2024-11-19 13:24:38.498690] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.012 [2024-11-19 13:24:38.579312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:36.012 [2024-11-19 13:24:38.620959] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
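The two ping checks just above are the smoke test for the topology nvmf_tcp_init builds earlier in this trace: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace to act as the NVMe-oF target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A standalone sketch of that plumbing, with interface names and addresses copied from this log (error handling and the harness wrappers omitted):

    # hedged sketch of the target/initiator namespace split traced above
    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0        # target side, 10.0.0.2, lives inside $NS
    INI_IF=cvl_0_1        # initiator side, 10.0.0.1, stays in the root namespace

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"            # move one port into the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port, tagged with a comment so cleanup can strip the rule later
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                           # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target namespace -> initiator

Every nvmf_tgt invocation that follows is then prefixed with ip netns exec cvl_0_0_ns_spdk, which is why the target listens on 10.0.0.2 while the initiator-side tools run unprefixed.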
00:32:36.012 [2024-11-19 13:24:38.620996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.012 [2024-11-19 13:24:38.621003] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.012 [2024-11-19 13:24:38.621010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:36.012 [2024-11-19 13:24:38.621015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:36.012 [2024-11-19 13:24:38.622188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.012 [2024-11-19 13:24:38.622189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.012 [2024-11-19 13:24:38.689893] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:36.012 [2024-11-19 13:24:38.690363] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:36.012 [2024-11-19 13:24:38.690640] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:36.012 5000+0 records in 00:32:36.012 5000+0 records out 00:32:36.012 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0181002 s, 566 MB/s 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:36.012 AIO0 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:36.012 [2024-11-19 13:24:38.819008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.012 13:24:38 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:36.012 [2024-11-19 13:24:38.859352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3083505 0 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3083505 0 idle 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3083505 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3083505 -w 256 00:32:36.012 13:24:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3083505 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.26 reactor_0' 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3083505 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.26 reactor_0 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3083505 1 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3083505 1 idle 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3083505 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3083505 -w 256 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3083550 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3083550 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3083651 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
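Both the idle checks above and the busy checks that follow go through the same reactor_is_busy_or_idle helper, and its whole mechanism is visible in the trace: sample top in batch mode for the target pid, pick out the reactor_<idx> thread line, strip leading whitespace, read field 9 (%CPU), and retry up to ten times with a one-second pause until the rate crosses the requested threshold. A condensed re-sketch of that loop (names mirror the interrupt/common.sh trace; treat it as illustrative rather than the canonical source):

    # hedged re-sketch of the top-parsing poll traced above; field 9 of
    # `top -bHn 1` output is the per-thread %CPU column
    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3
        local busy_threshold=${BUSY_THRESHOLD:-65} idle_threshold=30
        local j top_reactor cpu_rate
        for (( j = 10; j != 0; j-- )); do
            top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
            cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
            cpu_rate=${cpu_rate%.*}            # 99.9 -> 99, 0.0 -> 0
            if [[ $state = busy ]] && (( cpu_rate >= busy_threshold )); then
                return 0
            elif [[ $state = idle ]] && (( cpu_rate <= idle_threshold )); then
                return 0
            fi
            sleep 1                            # wrong state so far, sample again
        done
        return 1
    }

In this run the caller also sets BUSY_THRESHOLD=30 before the busy checks, so a reactor counts as busy at anything at or above 30% CPU; the 20.0% sample below therefore triggers one retry before the 99.9% reading passes.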
00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3083505 0 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3083505 0 busy 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3083505 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3083505 -w 256 00:32:36.012 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:36.270 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3083505 root 20 0 128.2g 47616 34560 R 20.0 0.0 0:00.29 reactor_0' 00:32:36.270 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3083505 root 20 0 128.2g 47616 34560 R 20.0 0.0 0:00.29 reactor_0 00:32:36.270 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:36.270 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:36.270 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=20.0 00:32:36.270 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=20 00:32:36.270 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:36.270 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:36.270 13:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:32:37.203 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:32:37.203 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:37.203 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:37.203 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3083505 -w 256 00:32:37.462 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3083505 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.65 reactor_0' 00:32:37.462 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3083505 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.65 reactor_0 00:32:37.462 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:37.462 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:37.462 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:37.462 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:37.462 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:37.462 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:32:37.462 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:37.462 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:37.462 13:24:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:37.462 13:24:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:37.462 13:24:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3083505 1 00:32:37.462 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3083505 1 busy 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3083505 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3083505 -w 256 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3083550 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:01.37 reactor_1' 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3083550 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:01.37 reactor_1 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:37.463 13:24:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3083651 00:32:47.434 Initializing NVMe Controllers 00:32:47.434 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:47.434 Controller IO queue size 256, less than required. 00:32:47.434 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:47.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:47.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:47.434 Initialization complete. Launching workers. 
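The throughput column in the summary that follows is fully determined by the -q 256 -o 4096 workload launched above: with a 4 KiB IO size, MiB/s is just IOPS x 4096 / 2^20. A quick sanity check of the rows below (plain arithmetic, no SPDK assumptions):

    awk 'BEGIN { printf "%.2f\n", 16340.00 * 4096 / 1048576 }'                # 63.83, core 2 row
    awk 'BEGIN { printf "%.2f\n", 16483.00 * 4096 / 1048576 }'                # 64.39, core 3 row
    awk 'BEGIN { printf "%.2f\n", (16340.00 + 16483.00) * 4096 / 1048576 }'   # 128.21, Total row

The average latency in the Total row is likewise the IOPS-weighted combination of the two per-core averages, and the min/max columns are simple minima/maxima over both queues.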
00:32:47.434 ========================================================
00:32:47.434                                                                              Latency(us)
00:32:47.434 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:32:47.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   16340.00      63.83   15676.02    3083.42   30872.01
00:32:47.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   16483.00      64.39   15535.93    7038.97   28317.69
00:32:47.434 ========================================================
00:32:47.434 Total                                                                    :   32823.00     128.21   15605.67    3083.42   30872.01
00:32:47.434
00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3083505 0 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3083505 0 idle 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3083505 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3083505 -w 256 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3083505 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.25 reactor_0' 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3083505 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.25 reactor_0 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3083505 1 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3083505 1 idle 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3083505 00:32:47.434 13:24:49 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3083505 -w 256 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3083550 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3083550 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:47.435 13:24:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:47.435 13:24:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:47.435 13:24:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:47.435 13:24:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:47.435 13:24:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:47.435 13:24:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3083505 0 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3083505 0 idle 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3083505 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3083505 -w 256 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3083505 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.51 reactor_0' 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3083505 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.51 reactor_0 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3083505 1 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3083505 1 idle 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3083505 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
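A few entries back, the nvme connect to nqn.2016-06.io.spdk:cnode1 was declared successful only after waitforserial saw the namespace surface in the block layer: it polls lsblk for the subsystem serial (SPDKISFASTANDAWESOME, the value passed with -s when the subsystem was created) until the expected device count shows up. A hedged sketch of that poll, following the commands in the trace:

    # hedged sketch of the waitforserial pattern traced above
    waitforserial() {
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        while (( i++ <= 15 )); do
            sleep 2                        # give the kernel time to enumerate the namespace
            # count block devices whose SERIAL column matches
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

    waitforserial SPDKISFASTANDAWESOME

The disconnect path below mirrors it with waitforserial_disconnect, which polls the same lsblk listing until the serial disappears.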
00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3083505 -w 256 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3083550 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.10 reactor_1' 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3083550 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.10 reactor_1 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:49.341 13:24:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:49.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:49.601 13:24:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:49.601 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:49.601 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:49.601 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:49.601 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:49.601 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:49.601 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:49.601 13:24:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:49.601 13:24:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:49.602 rmmod nvme_tcp 00:32:49.602 rmmod nvme_fabrics 00:32:49.602 rmmod nvme_keyring 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3083505 ']' 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3083505 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3083505 ']' 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3083505 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3083505 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3083505' 00:32:49.602 killing process with pid 3083505 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3083505 00:32:49.602 13:24:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3083505 00:32:49.861 13:24:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:49.861 13:24:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:49.861 13:24:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:49.861 13:24:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:49.861 13:24:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:49.861 13:24:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:49.861 13:24:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:49.861 13:24:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:49.861 13:24:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:49.861 13:24:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.861 13:24:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:49.861 13:24:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.396 13:24:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:52.396 00:32:52.396 real 0m22.788s 00:32:52.396 user 0m39.780s 00:32:52.396 sys 0m8.344s 00:32:52.396 13:24:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:52.396 13:24:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:52.396 ************************************ 00:32:52.396 END TEST nvmf_interrupt 00:32:52.396 ************************************ 00:32:52.396 00:32:52.396 real 27m27.999s 00:32:52.396 user 56m31.321s 00:32:52.396 sys 9m20.921s 00:32:52.396 13:24:55 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:52.396 13:24:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:52.396 ************************************ 00:32:52.396 END TEST nvmf_tcp 00:32:52.396 ************************************ 00:32:52.396 13:24:55 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:52.396 13:24:55 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:52.396 13:24:55 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:52.396 13:24:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:52.396 13:24:55 -- common/autotest_common.sh@10 -- # set +x 00:32:52.396 ************************************ 00:32:52.396 START TEST spdkcli_nvmf_tcp 00:32:52.396 ************************************ 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:52.396 * Looking for test storage... 00:32:52.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:52.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.396 --rc genhtml_branch_coverage=1 00:32:52.396 --rc genhtml_function_coverage=1 00:32:52.396 --rc genhtml_legend=1 00:32:52.396 --rc geninfo_all_blocks=1 00:32:52.396 --rc geninfo_unexecuted_blocks=1 00:32:52.396 00:32:52.396 ' 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:52.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.396 --rc genhtml_branch_coverage=1 00:32:52.396 --rc genhtml_function_coverage=1 00:32:52.396 --rc genhtml_legend=1 00:32:52.396 --rc geninfo_all_blocks=1 00:32:52.396 --rc geninfo_unexecuted_blocks=1 00:32:52.396 00:32:52.396 ' 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:52.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.396 --rc genhtml_branch_coverage=1 00:32:52.396 --rc genhtml_function_coverage=1 00:32:52.396 --rc genhtml_legend=1 00:32:52.396 --rc geninfo_all_blocks=1 00:32:52.396 --rc geninfo_unexecuted_blocks=1 00:32:52.396 00:32:52.396 ' 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:52.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.396 --rc genhtml_branch_coverage=1 00:32:52.396 --rc genhtml_function_coverage=1 00:32:52.396 --rc genhtml_legend=1 00:32:52.396 --rc geninfo_all_blocks=1 00:32:52.396 --rc geninfo_unexecuted_blocks=1 00:32:52.396 00:32:52.396 ' 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:52.396 
13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.396 13:24:55 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:52.397 13:24:55 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:52.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3086344 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3086344 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3086344 ']' 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:52.397 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:52.397 [2024-11-19 13:24:55.588346] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:32:52.397 [2024-11-19 13:24:55.588395] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086344 ] 00:32:52.397 [2024-11-19 13:24:55.661842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:52.397 [2024-11-19 13:24:55.705696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.397 [2024-11-19 13:24:55.705700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.655 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:52.655 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:52.655 13:24:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:52.655 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:52.655 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:52.655 13:24:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:52.655 13:24:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:52.655 13:24:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:52.655 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:52.655 13:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:52.655 13:24:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:52.655 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:52.655 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:52.655 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:52.655 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:52.655 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:52.655 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:52.655 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:52.655 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:52.655 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:52.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:52.655 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:52.655 ' 00:32:55.183 [2024-11-19 13:24:58.503125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:56.555 [2024-11-19 13:24:59.847693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:59.082 [2024-11-19 13:25:02.339494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:01.608 [2024-11-19 13:25:04.494258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:02.980 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:02.980 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:02.980 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:02.980 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:02.980 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:02.980 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:02.980 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:02.980 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:02.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:02.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:02.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:02.981 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:02.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:02.981 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:02.981 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:02.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:02.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:02.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:02.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:02.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:02.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:02.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:02.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:02.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:02.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:02.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:02.981 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:02.981 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:02.981 13:25:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:02.981 13:25:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:02.981 13:25:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:02.981 13:25:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:02.981 13:25:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:02.981 13:25:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:02.981 13:25:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:02.981 13:25:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:03.547 13:25:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:03.547 13:25:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:03.547 13:25:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:03.547 13:25:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:03.547 13:25:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:03.547 
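The batch above is the create half of the spdkcli test: spdkcli_job.py feeds each quoted command to spdkcli against the running nvmf_tgt, and check_match then diffs the output of spdkcli.py ll /nvmf against the stored spdkcli_nvmf.test.match file. A minimal sketch of driving the same flow by hand, assuming the same workspace layout and abbreviated to a single bdev (the command shapes are taken verbatim from the batch; the selection itself is illustrative, not the test's exact sequence):

  spdkcli=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py
  $spdkcli /bdevs/malloc create 32 512 Malloc1          # 32 MiB malloc bdev, 512-byte blocks
  $spdkcli nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  $spdkcli /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  $spdkcli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
  $spdkcli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
  $spdkcli ll /nvmf                                     # the tree that check_match compares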
13:25:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:03.547 13:25:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:03.547 13:25:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:03.547 13:25:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:03.547 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:03.547 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:03.547 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:03.547 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:03.547 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:03.547 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:03.547 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:03.547 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:03.547 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:03.547 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:03.547 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:03.547 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:03.547 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:03.547 ' 00:33:10.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:10.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:10.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:10.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:10.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:10.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:10.105 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:10.105 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:10.105 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:10.105 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:10.105 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:10.105 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:10.105 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:10.105 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.105 
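The delete batch mirrors the create batch in reverse: namespaces, hosts and listeners are removed first, then the subsystems, then the malloc bdevs in the opposite order of creation, so no object is deleted while a subsystem still references it. As a sketch of an alternative teardown under the same workspace layout, the clear_config.py helper pointed to by spdk_clear_config_py at the top of this test can drop the entire live configuration in one call; the clear_config subcommand name is assumed from that helper's CLI:

  # assumed invocation of the helper referenced by spdk_clear_config_py above
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py clear_config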
13:25:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3086344 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3086344 ']' 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3086344 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3086344 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3086344' 00:33:10.105 killing process with pid 3086344 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3086344 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3086344 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3086344 ']' 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3086344 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3086344 ']' 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3086344 00:33:10.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3086344) - No such process 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3086344 is not found' 00:33:10.105 Process with pid 3086344 is not found 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:10.105 00:33:10.105 real 0m17.298s 00:33:10.105 user 0m38.164s 00:33:10.105 sys 0m0.778s 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:10.105 13:25:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.105 ************************************ 00:33:10.105 END TEST spdkcli_nvmf_tcp 00:33:10.105 ************************************ 00:33:10.105 13:25:12 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:10.105 13:25:12 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:10.105 13:25:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:10.105 13:25:12 -- common/autotest_common.sh@10 -- # set +x 00:33:10.105 ************************************ 00:33:10.105 START TEST nvmf_identify_passthru 00:33:10.105 ************************************ 00:33:10.105 13:25:12 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:10.105 * Looking for test 
storage... 00:33:10.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:10.106 13:25:12 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:10.106 13:25:12 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:33:10.106 13:25:12 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:10.106 13:25:12 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:10.106 13:25:12 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:10.106 13:25:12 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:10.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.106 --rc genhtml_branch_coverage=1 00:33:10.106 --rc genhtml_function_coverage=1 00:33:10.106 --rc genhtml_legend=1 00:33:10.106 --rc geninfo_all_blocks=1 00:33:10.106 --rc geninfo_unexecuted_blocks=1 00:33:10.106 00:33:10.106 ' 00:33:10.106 13:25:12 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:10.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.106 --rc genhtml_branch_coverage=1 00:33:10.106 --rc genhtml_function_coverage=1 00:33:10.106 --rc genhtml_legend=1 00:33:10.106 --rc geninfo_all_blocks=1 00:33:10.106 --rc geninfo_unexecuted_blocks=1 00:33:10.106 00:33:10.106 ' 00:33:10.106 13:25:12 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:10.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.106 --rc genhtml_branch_coverage=1 00:33:10.106 --rc genhtml_function_coverage=1 00:33:10.106 --rc genhtml_legend=1 00:33:10.106 --rc geninfo_all_blocks=1 00:33:10.106 --rc geninfo_unexecuted_blocks=1 00:33:10.106 00:33:10.106 ' 00:33:10.106 13:25:12 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:10.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.106 --rc genhtml_branch_coverage=1 00:33:10.106 --rc genhtml_function_coverage=1 00:33:10.106 --rc genhtml_legend=1 00:33:10.106 --rc geninfo_all_blocks=1 00:33:10.106 --rc geninfo_unexecuted_blocks=1 00:33:10.106 00:33:10.106 ' 00:33:10.106 13:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:10.106 13:25:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.106 13:25:12 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.106 13:25:12 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.106 13:25:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:10.106 13:25:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:10.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:10.106 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:10.106 13:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:10.106 13:25:12 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:10.106 13:25:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.106 13:25:12 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.106 13:25:12 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.107 13:25:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:10.107 13:25:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.107 13:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:10.107 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:10.107 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:10.107 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:10.107 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:10.107 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:10.107 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.107 13:25:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:10.107 13:25:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.107 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:10.107 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:10.107 13:25:12 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:10.107 13:25:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:15.485 13:25:18 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:15.485 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:15.485 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:15.485 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:15.486 Found net devices under 0000:86:00.0: cvl_0_0 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:15.486 Found net devices under 0000:86:00.1: cvl_0_1 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:15.486 13:25:18 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:15.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:15.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:33:15.486 00:33:15.486 --- 10.0.0.2 ping statistics --- 00:33:15.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:15.486 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:15.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:15.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:33:15.486 00:33:15.486 --- 10.0.0.1 ping statistics --- 00:33:15.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:15.486 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:15.486 13:25:18 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:15.486 13:25:18 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:15.486 13:25:18 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:15.486 13:25:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:15.486 13:25:18 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:15.486 13:25:18 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:15.486 13:25:18 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:15.486 13:25:18 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:15.486 13:25:18 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:15.486 13:25:18 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:15.486 13:25:18 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:15.486 13:25:18 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:15.486 13:25:18 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:15.486 13:25:18 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:15.745 13:25:18 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:15.745 13:25:18 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:15.745 13:25:18 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:33:15.745 13:25:18 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:15.745 13:25:18 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:15.745 13:25:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:15.745 13:25:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:15.746 13:25:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:19.940 13:25:23 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:33:19.940 13:25:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:19.940 13:25:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:19.940 13:25:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:24.129 13:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:24.129 13:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.129 13:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.129 13:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3093597 00:33:24.129 13:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:24.129 13:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:24.129 13:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3093597 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3093597 ']' 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.129 [2024-11-19 13:25:27.235575] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:33:24.129 [2024-11-19 13:25:27.235622] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:24.129 [2024-11-19 13:25:27.312481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:24.129 [2024-11-19 13:25:27.356229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:24.129 [2024-11-19 13:25:27.356269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:24.129 [2024-11-19 13:25:27.356276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:24.129 [2024-11-19 13:25:27.356283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:24.129 [2024-11-19 13:25:27.356289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:24.129 [2024-11-19 13:25:27.357725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.129 [2024-11-19 13:25:27.357838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:24.129 [2024-11-19 13:25:27.357925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:24.129 [2024-11-19 13:25:27.357926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:24.129 13:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.129 INFO: Log level set to 20 00:33:24.129 INFO: Requests: 00:33:24.129 { 00:33:24.129 "jsonrpc": "2.0", 00:33:24.129 "method": "nvmf_set_config", 00:33:24.129 "id": 1, 00:33:24.129 "params": { 00:33:24.129 "admin_cmd_passthru": { 00:33:24.129 "identify_ctrlr": true 00:33:24.129 } 00:33:24.129 } 00:33:24.129 } 00:33:24.129 00:33:24.129 INFO: response: 00:33:24.129 { 00:33:24.129 "jsonrpc": "2.0", 00:33:24.129 "id": 1, 00:33:24.129 "result": true 00:33:24.129 } 00:33:24.129 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.129 13:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.129 INFO: Setting log level to 20 00:33:24.129 INFO: Setting log level to 20 00:33:24.129 INFO: Log level set to 20 00:33:24.129 INFO: Log level set to 20 00:33:24.129 INFO: Requests: 00:33:24.129 { 00:33:24.129 "jsonrpc": "2.0", 00:33:24.129 "method": "framework_start_init", 00:33:24.129 "id": 1 00:33:24.129 } 00:33:24.129 00:33:24.129 INFO: Requests: 00:33:24.129 { 00:33:24.129 "jsonrpc": "2.0", 00:33:24.129 "method": "framework_start_init", 00:33:24.129 "id": 1 00:33:24.129 } 00:33:24.129 00:33:24.129 [2024-11-19 13:25:27.469985] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:24.129 INFO: response: 00:33:24.129 { 00:33:24.129 "jsonrpc": "2.0", 00:33:24.129 "id": 1, 00:33:24.129 "result": true 00:33:24.129 } 00:33:24.129 00:33:24.129 INFO: response: 00:33:24.129 { 00:33:24.129 "jsonrpc": "2.0", 00:33:24.129 "id": 1, 00:33:24.129 "result": true 00:33:24.129 } 00:33:24.129 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.129 13:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.129 13:25:27 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:33:24.129 INFO: Setting log level to 40 00:33:24.129 INFO: Setting log level to 40 00:33:24.129 INFO: Setting log level to 40 00:33:24.129 [2024-11-19 13:25:27.483318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.129 13:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:24.129 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.387 13:25:27 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:24.387 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.387 13:25:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:27.669 Nvme0n1 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:27.669 [2024-11-19 13:25:30.389162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:27.669 [ 00:33:27.669 { 00:33:27.669 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:27.669 "subtype": "Discovery", 00:33:27.669 "listen_addresses": [], 00:33:27.669 "allow_any_host": true, 00:33:27.669 "hosts": [] 00:33:27.669 }, 00:33:27.669 { 00:33:27.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:27.669 "subtype": "NVMe", 00:33:27.669 "listen_addresses": [ 00:33:27.669 { 00:33:27.669 "trtype": "TCP", 00:33:27.669 "adrfam": "IPv4", 00:33:27.669 "traddr": "10.0.0.2", 00:33:27.669 "trsvcid": "4420" 00:33:27.669 } 00:33:27.669 ], 00:33:27.669 "allow_any_host": true, 00:33:27.669 "hosts": [], 00:33:27.669 "serial_number": 
"SPDK00000000000001", 00:33:27.669 "model_number": "SPDK bdev Controller", 00:33:27.669 "max_namespaces": 1, 00:33:27.669 "min_cntlid": 1, 00:33:27.669 "max_cntlid": 65519, 00:33:27.669 "namespaces": [ 00:33:27.669 { 00:33:27.669 "nsid": 1, 00:33:27.669 "bdev_name": "Nvme0n1", 00:33:27.669 "name": "Nvme0n1", 00:33:27.669 "nguid": "2524F9B5CF3A4644AFE1ABF4D3791C40", 00:33:27.669 "uuid": "2524f9b5-cf3a-4644-afe1-abf4d3791c40" 00:33:27.669 } 00:33:27.669 ] 00:33:27.669 } 00:33:27.669 ] 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:27.669 13:25:30 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:27.669 13:25:30 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:27.669 13:25:30 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:27.669 13:25:30 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:27.669 13:25:30 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:27.669 13:25:30 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:27.669 13:25:30 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:27.669 rmmod nvme_tcp 00:33:27.669 rmmod nvme_fabrics 00:33:27.669 rmmod nvme_keyring 00:33:27.669 13:25:30 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:27.669 13:25:30 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:27.669 13:25:30 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:27.669 13:25:30 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 3093597 ']' 00:33:27.669 13:25:30 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3093597 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3093597 ']' 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3093597 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3093597 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3093597' 00:33:27.669 killing process with pid 3093597 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3093597 00:33:27.669 13:25:30 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3093597 00:33:29.044 13:25:32 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:29.044 13:25:32 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:29.044 13:25:32 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:29.044 13:25:32 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:29.044 13:25:32 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:29.044 13:25:32 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:29.044 13:25:32 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:29.044 13:25:32 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:29.044 13:25:32 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:29.044 13:25:32 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.044 13:25:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:29.044 13:25:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.581 13:25:34 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:31.581 00:33:31.581 real 0m21.745s 00:33:31.581 user 0m26.661s 00:33:31.581 sys 0m6.156s 00:33:31.581 13:25:34 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:31.581 13:25:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:31.581 ************************************ 00:33:31.581 END TEST nvmf_identify_passthru 00:33:31.581 ************************************ 00:33:31.581 13:25:34 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:31.581 13:25:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:31.581 13:25:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:31.581 13:25:34 -- common/autotest_common.sh@10 -- # set +x 00:33:31.581 ************************************ 00:33:31.581 START TEST nvmf_dif 00:33:31.581 ************************************ 00:33:31.581 13:25:34 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:31.581 * Looking for test 
storage... 00:33:31.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:31.581 13:25:34 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:31.581 13:25:34 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:33:31.581 13:25:34 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:31.581 13:25:34 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:31.581 13:25:34 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:31.581 13:25:34 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:31.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.581 --rc genhtml_branch_coverage=1 00:33:31.581 --rc genhtml_function_coverage=1 00:33:31.581 --rc genhtml_legend=1 00:33:31.581 --rc geninfo_all_blocks=1 00:33:31.581 --rc geninfo_unexecuted_blocks=1 00:33:31.581 00:33:31.581 ' 00:33:31.581 13:25:34 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:31.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.581 --rc genhtml_branch_coverage=1 00:33:31.581 --rc genhtml_function_coverage=1 00:33:31.581 --rc genhtml_legend=1 00:33:31.581 --rc geninfo_all_blocks=1 00:33:31.581 --rc geninfo_unexecuted_blocks=1 00:33:31.581 00:33:31.581 ' 00:33:31.581 13:25:34 nvmf_dif -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:31.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.581 --rc genhtml_branch_coverage=1 00:33:31.581 --rc genhtml_function_coverage=1 00:33:31.581 --rc genhtml_legend=1 00:33:31.581 --rc geninfo_all_blocks=1 00:33:31.581 --rc geninfo_unexecuted_blocks=1 00:33:31.581 00:33:31.581 ' 00:33:31.581 13:25:34 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:31.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.581 --rc genhtml_branch_coverage=1 00:33:31.581 --rc genhtml_function_coverage=1 00:33:31.581 --rc genhtml_legend=1 00:33:31.581 --rc geninfo_all_blocks=1 00:33:31.581 --rc geninfo_unexecuted_blocks=1 00:33:31.581 00:33:31.581 ' 00:33:31.581 13:25:34 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.581 13:25:34 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.581 13:25:34 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.581 13:25:34 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.581 13:25:34 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.581 13:25:34 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:31.581 13:25:34 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:31.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:31.581 13:25:34 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:31.581 13:25:34 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:31.581 13:25:34 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:31.581 13:25:34 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:31.581 13:25:34 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:31.581 13:25:34 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:31.582 13:25:34 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:31.582 13:25:34 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:31.582 13:25:34 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.582 13:25:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:31.582 13:25:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.582 13:25:34 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:31.582 13:25:34 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:31.582 13:25:34 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:33:31.582 13:25:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:38.315 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:38.315 
13:25:40 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:38.315 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:38.315 Found net devices under 0000:86:00.0: cvl_0_0 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:38.315 13:25:40 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:38.316 Found net devices under 0000:86:00.1: cvl_0_1 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:38.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:38.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:33:38.316 00:33:38.316 --- 10.0.0.2 ping statistics --- 00:33:38.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.316 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:38.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:38.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:33:38.316 00:33:38.316 --- 10.0.0.1 ping statistics --- 00:33:38.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.316 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:38.316 13:25:40 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:40.221 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:40.221 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:40.221 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:40.221 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:40.221 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:40.221 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:40.221 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:40.221 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:40.221 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:40.221 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:40.221 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:40.221 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:40.221 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:40.221 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:40.221 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:40.221 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:40.221 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:40.221 13:25:43 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:40.221 13:25:43 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:40.221 13:25:43 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:40.222 13:25:43 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:40.222 13:25:43 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:40.222 13:25:43 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:40.222 13:25:43 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:40.222 13:25:43 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:40.222 13:25:43 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:40.222 13:25:43 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:40.222 13:25:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:40.222 13:25:43 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3099067 00:33:40.222 13:25:43 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3099067 00:33:40.222 13:25:43 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:40.222 13:25:43 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3099067 ']' 00:33:40.222 13:25:43 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.222 13:25:43 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:40.222 13:25:43 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:33:40.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:40.222 13:25:43 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:40.222 13:25:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:40.222 [2024-11-19 13:25:43.496962] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:33:40.222 [2024-11-19 13:25:43.497011] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:40.222 [2024-11-19 13:25:43.558643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.480 [2024-11-19 13:25:43.600536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:40.480 [2024-11-19 13:25:43.600569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:40.480 [2024-11-19 13:25:43.600576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:40.480 [2024-11-19 13:25:43.600582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:40.480 [2024-11-19 13:25:43.600587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:40.480 [2024-11-19 13:25:43.601134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.480 13:25:43 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:40.480 13:25:43 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:40.480 13:25:43 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:40.480 13:25:43 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:40.480 13:25:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:40.480 13:25:43 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.480 13:25:43 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:40.480 13:25:43 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:40.480 13:25:43 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.480 13:25:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:40.480 [2024-11-19 13:25:43.732427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:40.480 13:25:43 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.480 13:25:43 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:40.480 13:25:43 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:40.480 13:25:43 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:40.480 13:25:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:40.480 ************************************ 00:33:40.481 START TEST fio_dif_1_default 00:33:40.481 ************************************ 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:40.481 bdev_null0 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:40.481 [2024-11-19 13:25:43.800733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:40.481 { 00:33:40.481 "params": { 00:33:40.481 "name": "Nvme$subsystem", 00:33:40.481 "trtype": "$TEST_TRANSPORT", 00:33:40.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.481 "adrfam": "ipv4", 00:33:40.481 "trsvcid": "$NVMF_PORT", 00:33:40.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.481 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:33:40.481 "hdgst": ${hdgst:-false}, 00:33:40.481 "ddgst": ${ddgst:-false} 00:33:40.481 }, 00:33:40.481 "method": "bdev_nvme_attach_controller" 00:33:40.481 } 00:33:40.481 EOF 00:33:40.481 )") 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:40.481 "params": { 00:33:40.481 "name": "Nvme0", 00:33:40.481 "trtype": "tcp", 00:33:40.481 "traddr": "10.0.0.2", 00:33:40.481 "adrfam": "ipv4", 00:33:40.481 "trsvcid": "4420", 00:33:40.481 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:40.481 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:40.481 "hdgst": false, 00:33:40.481 "ddgst": false 00:33:40.481 }, 00:33:40.481 "method": "bdev_nvme_attach_controller" 00:33:40.481 }' 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:40.481 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:40.766 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:40.766 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:40.766 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:40.766 13:25:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:41.027 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:41.027 fio-3.35 00:33:41.027 Starting 1 thread 00:33:53.237 00:33:53.237 filename0: (groupid=0, jobs=1): err= 0: pid=3099444: Tue Nov 19 13:25:54 2024 00:33:53.237 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10032msec) 00:33:53.237 slat (nsec): min=5866, max=44276, avg=6279.18, stdev=1043.78 00:33:53.237 clat (usec): min=392, max=45258, avg=21056.85, stdev=20500.54 00:33:53.237 lat (usec): min=398, max=45303, avg=21063.13, stdev=20500.52 00:33:53.237 clat percentiles (usec): 00:33:53.237 | 1.00th=[ 437], 5.00th=[ 469], 10.00th=[ 482], 20.00th=[ 586], 00:33:53.237 | 30.00th=[ 603], 40.00th=[ 611], 50.00th=[ 660], 60.00th=[41157], 00:33:53.237 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:33:53.237 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:33:53.237 | 99.99th=[45351] 00:33:53.237 bw ( KiB/s): min= 670, max= 832, per=99.98%, avg=759.90, stdev=31.23, samples=20 00:33:53.237 iops : min= 167, max= 208, avg=189.95, stdev= 7.88, samples=20 00:33:53.237 lat (usec) : 500=13.50%, 750=36.50% 00:33:53.237 lat (msec) : 50=50.00% 00:33:53.237 cpu : usr=92.27%, sys=7.47%, ctx=24, majf=0, minf=0 00:33:53.237 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:53.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.237 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.237 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:53.237 
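A quick consistency check on the job statistics above, using only numbers from the log: 1904 reads of 4 KiB each is 7616 KiB of I/O, and over the 10032 ms run that works out to 7616/10.032 ≈ 759 KiB/s, i.e. ≈ 190 IOPS, matching the reported IOPS=189 / BW=759KiB/s. The ~21 ms average completion latency is likewise just the 50/50 blend of the two clusters in the percentile table (13.50% + 36.50% of completions land around 0.4-0.75 ms, the remaining 50.00% around 41-42 ms):

    # Sanity math for the fio summary above (bc one-liners, values from the log):
    echo "scale=1; 7616/10.032" | bc           # ≈ 759 KiB/s read bandwidth
    echo "scale=1; 759.2/4" | bc               # ≈ 190 IOPS at 4 KiB per read
    echo "scale=2; 0.5*0.660 + 0.5*41.5" | bc  # ≈ 21.08 ms, vs. reported avg clat 21056.85 usec
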
00:33:53.237 Run status group 0 (all jobs): 00:33:53.237 READ: bw=759KiB/s (777kB/s), 759KiB/s-759KiB/s (777kB/s-777kB/s), io=7616KiB (7799kB), run=10032-10032msec 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.237 00:33:53.237 real 0m11.282s 00:33:53.237 user 0m15.761s 00:33:53.237 sys 0m1.050s 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:53.237 ************************************ 00:33:53.237 END TEST fio_dif_1_default 00:33:53.237 ************************************ 00:33:53.237 13:25:55 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:53.237 13:25:55 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:53.237 13:25:55 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:53.237 13:25:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:53.237 ************************************ 00:33:53.237 START TEST fio_dif_1_multi_subsystems 00:33:53.237 ************************************ 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:53.237 bdev_null0 00:33:53.237 13:25:55 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:53.237 [2024-11-19 13:25:55.164329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:53.237 bdev_null1 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:53.237 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:53.238 { 00:33:53.238 "params": { 00:33:53.238 "name": "Nvme$subsystem", 00:33:53.238 "trtype": "$TEST_TRANSPORT", 00:33:53.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.238 "adrfam": "ipv4", 00:33:53.238 "trsvcid": "$NVMF_PORT", 00:33:53.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.238 "hdgst": ${hdgst:-false}, 00:33:53.238 "ddgst": ${ddgst:-false} 00:33:53.238 }, 00:33:53.238 "method": "bdev_nvme_attach_controller" 00:33:53.238 } 00:33:53.238 EOF 00:33:53.238 )") 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:53.238 { 00:33:53.238 "params": { 00:33:53.238 "name": "Nvme$subsystem", 00:33:53.238 "trtype": "$TEST_TRANSPORT", 00:33:53.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.238 "adrfam": "ipv4", 00:33:53.238 "trsvcid": "$NVMF_PORT", 00:33:53.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.238 "hdgst": ${hdgst:-false}, 00:33:53.238 "ddgst": ${ddgst:-false} 00:33:53.238 }, 00:33:53.238 "method": "bdev_nvme_attach_controller" 00:33:53.238 } 00:33:53.238 EOF 00:33:53.238 )") 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:53.238 "params": { 00:33:53.238 "name": "Nvme0", 00:33:53.238 "trtype": "tcp", 00:33:53.238 "traddr": "10.0.0.2", 00:33:53.238 "adrfam": "ipv4", 00:33:53.238 "trsvcid": "4420", 00:33:53.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:53.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:53.238 "hdgst": false, 00:33:53.238 "ddgst": false 00:33:53.238 }, 00:33:53.238 "method": "bdev_nvme_attach_controller" 00:33:53.238 },{ 00:33:53.238 "params": { 00:33:53.238 "name": "Nvme1", 00:33:53.238 "trtype": "tcp", 00:33:53.238 "traddr": "10.0.0.2", 00:33:53.238 "adrfam": "ipv4", 00:33:53.238 "trsvcid": "4420", 00:33:53.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:53.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:53.238 "hdgst": false, 00:33:53.238 "ddgst": false 00:33:53.238 }, 00:33:53.238 "method": "bdev_nvme_attach_controller" 00:33:53.238 }' 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:53.238 13:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:53.238 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:53.238 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:53.238 fio-3.35 00:33:53.238 Starting 2 threads 00:34:03.218 00:34:03.218 filename0: (groupid=0, jobs=1): err= 0: pid=3101406: Tue Nov 19 13:26:06 2024 00:34:03.218 read: IOPS=225, BW=903KiB/s (925kB/s)(9040KiB/10008msec) 00:34:03.218 slat (nsec): min=5935, max=66042, avg=8527.72, stdev=5054.67 00:34:03.218 clat (usec): min=379, max=42610, avg=17687.38, stdev=20285.94 00:34:03.218 lat (usec): min=385, max=42617, avg=17695.90, stdev=20284.60 00:34:03.218 clat percentiles (usec): 00:34:03.218 | 1.00th=[ 400], 5.00th=[ 412], 10.00th=[ 420], 20.00th=[ 429], 00:34:03.218 | 30.00th=[ 437], 40.00th=[ 445], 50.00th=[ 478], 60.00th=[40633], 00:34:03.218 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:34:03.218 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:03.218 | 99.99th=[42730] 00:34:03.218 bw ( KiB/s): min= 704, max= 1088, per=51.01%, avg=902.40, stdev=111.63, samples=20 00:34:03.218 iops : min= 176, max= 272, avg=225.60, stdev=27.91, samples=20 00:34:03.218 lat (usec) : 500=53.05%, 750=5.00% 00:34:03.218 lat (msec) : 50=41.95% 00:34:03.218 cpu : usr=98.21%, sys=1.49%, ctx=32, majf=0, minf=90 00:34:03.218 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.218 issued rwts: total=2260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.218 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:03.218 filename1: (groupid=0, jobs=1): err= 0: pid=3101407: Tue Nov 19 13:26:06 2024 00:34:03.218 read: IOPS=216, BW=865KiB/s (886kB/s)(8656KiB/10002msec) 00:34:03.218 slat (nsec): min=6146, max=38531, avg=9167.77, stdev=6169.58 00:34:03.218 clat (usec): min=368, max=42656, avg=18459.74, stdev=20411.27 00:34:03.218 lat (usec): min=375, max=42663, avg=18468.91, stdev=20409.57 00:34:03.218 clat percentiles (usec): 00:34:03.218 | 1.00th=[ 392], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 429], 00:34:03.218 | 30.00th=[ 433], 40.00th=[ 445], 50.00th=[ 469], 60.00th=[40633], 00:34:03.218 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42730], 00:34:03.218 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:03.218 | 99.99th=[42730] 00:34:03.218 bw ( KiB/s): min= 768, max= 1024, per=49.32%, avg=872.42, stdev=91.04, samples=19 00:34:03.218 iops : min= 192, max= 256, avg=218.11, stdev=22.76, samples=19 00:34:03.218 lat (usec) : 500=53.42%, 750=2.59% 00:34:03.218 lat (msec) : 2=0.18%, 50=43.81% 00:34:03.218 cpu : usr=98.60%, sys=1.13%, ctx=9, majf=0, minf=129 00:34:03.218 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.218 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.218 issued rwts: total=2164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.218 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:03.218 00:34:03.218 Run status group 0 (all jobs): 00:34:03.218 READ: bw=1768KiB/s (1811kB/s), 865KiB/s-903KiB/s (886kB/s-925kB/s), io=17.3MiB (18.1MB), run=10002-10008msec 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.218 00:34:03.218 real 0m11.364s 00:34:03.218 user 0m26.789s 00:34:03.218 sys 0m0.635s 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:03.218 13:26:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:03.218 ************************************ 00:34:03.218 END TEST fio_dif_1_multi_subsystems 00:34:03.218 ************************************ 00:34:03.218 
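For readers following the xtrace above: each subsystem in these dif tests is stood up with the same four RPCs (a null bdev with DIF metadata, an NVMe-oF subsystem, a namespace, and a TCP listener). A minimal standalone sketch using SPDK's rpc.py, assuming a running nvmf target with a TCP transport already created; the rpc.py path is hypothetical, while the commands and flags mirror the rpc_cmd calls in the trace:

RPC=/path/to/spdk/scripts/rpc.py   # hypothetical checkout location
sub=0
# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
$RPC bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
$RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
    --serial-number "53313233-$sub" --allow-any-host
$RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
$RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
    -t tcp -a 10.0.0.2 -s 4420

Teardown, as in the destroy_subsystems calls above, is the reverse: nvmf_delete_subsystem followed by bdev_null_delete.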
13:26:06 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:03.218 13:26:06 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:03.218 13:26:06 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:03.218 13:26:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:03.218 ************************************ 00:34:03.218 START TEST fio_dif_rand_params 00:34:03.218 ************************************ 00:34:03.218 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:03.218 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:03.218 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:03.218 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:03.218 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:03.218 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:03.218 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:03.218 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:03.218 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:03.218 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:03.219 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:03.219 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:03.219 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:03.219 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:03.219 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.219 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:03.219 bdev_null0 00:34:03.219 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.219 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:03.219 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.219 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:03.219 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.219 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:03.219 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.219 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:03.478 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.478 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:03.478 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.478 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:03.478 [2024-11-19 13:26:06.596832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:34:03.478 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.478 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:03.478 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:03.478 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:03.478 13:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:03.478 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:03.479 { 00:34:03.479 "params": { 00:34:03.479 "name": "Nvme$subsystem", 00:34:03.479 "trtype": "$TEST_TRANSPORT", 00:34:03.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:03.479 "adrfam": "ipv4", 00:34:03.479 "trsvcid": "$NVMF_PORT", 00:34:03.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:03.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:03.479 "hdgst": ${hdgst:-false}, 00:34:03.479 "ddgst": ${ddgst:-false} 00:34:03.479 }, 00:34:03.479 "method": "bdev_nvme_attach_controller" 00:34:03.479 } 00:34:03.479 EOF 00:34:03.479 )") 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:03.479 13:26:06 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:03.479 "params": { 00:34:03.479 "name": "Nvme0", 00:34:03.479 "trtype": "tcp", 00:34:03.479 "traddr": "10.0.0.2", 00:34:03.479 "adrfam": "ipv4", 00:34:03.479 "trsvcid": "4420", 00:34:03.479 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:03.479 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:03.479 "hdgst": false, 00:34:03.479 "ddgst": false 00:34:03.479 }, 00:34:03.479 "method": "bdev_nvme_attach_controller" 00:34:03.479 }' 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:03.479 13:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:03.738 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:03.738 ... 
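The single-quoted JSON printed just above is the bdev configuration that fio's spdk_bdev engine loads; fio receives it and the generated job file over /dev/fd via process substitution. A sketch of an equivalent invocation, assuming the standard SPDK JSON config wrapper (the trace prints only the inner method/params objects) and hypothetical paths; the bdev name Nvme0n1 is assumed from the controller name Nvme0:

SPDK=/path/to/spdk                      # hypothetical checkout location
json='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0" } } ] } ] }'
fio_jobs=$(cat <<'EOF'
; section name matches the "filename0" banner below
[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based
thread=1
EOF
)
LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf <(printf '%s' "$json") \
    <(printf '%s' "$fio_jobs")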
00:34:03.738 fio-3.35 00:34:03.738 Starting 3 threads 00:34:10.308 00:34:10.308 filename0: (groupid=0, jobs=1): err= 0: pid=3103367: Tue Nov 19 13:26:12 2024 00:34:10.308 read: IOPS=324, BW=40.6MiB/s (42.6MB/s)(205MiB/5045msec) 00:34:10.308 slat (nsec): min=6237, max=58134, avg=11081.35, stdev=2203.74 00:34:10.308 clat (usec): min=5500, max=50317, avg=9226.87, stdev=4589.10 00:34:10.308 lat (usec): min=5511, max=50329, avg=9237.95, stdev=4589.24 00:34:10.308 clat percentiles (usec): 00:34:10.308 | 1.00th=[ 6456], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 7898], 00:34:10.308 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:34:10.308 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10421], 00:34:10.308 | 99.00th=[47973], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:34:10.308 | 99.99th=[50070] 00:34:10.308 bw ( KiB/s): min=34048, max=44800, per=34.93%, avg=41856.00, stdev=3662.88, samples=10 00:34:10.308 iops : min= 266, max= 350, avg=327.00, stdev=28.62, samples=10 00:34:10.308 lat (msec) : 10=89.87%, 20=8.85%, 50=1.10%, 100=0.18% 00:34:10.308 cpu : usr=94.03%, sys=5.69%, ctx=11, majf=0, minf=9 00:34:10.308 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.308 issued rwts: total=1638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.308 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:10.308 filename0: (groupid=0, jobs=1): err= 0: pid=3103368: Tue Nov 19 13:26:12 2024 00:34:10.308 read: IOPS=313, BW=39.2MiB/s (41.1MB/s)(196MiB/5004msec) 00:34:10.308 slat (nsec): min=6213, max=26549, avg=11159.72, stdev=1797.77 00:34:10.308 clat (usec): min=3865, max=48954, avg=9553.59, stdev=2704.81 00:34:10.308 lat (usec): min=3877, max=48965, avg=9564.75, stdev=2705.20 00:34:10.308 clat percentiles (usec): 00:34:10.308 | 1.00th=[ 5735], 5.00th=[ 6915], 10.00th=[ 7767], 20.00th=[ 8455], 00:34:10.308 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9765], 00:34:10.309 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11076], 95.00th=[11338], 00:34:10.309 | 99.00th=[12256], 99.50th=[13173], 99.90th=[47973], 99.95th=[49021], 00:34:10.309 | 99.99th=[49021] 00:34:10.309 bw ( KiB/s): min=38912, max=41216, per=33.48%, avg=40115.20, stdev=935.17, samples=10 00:34:10.309 iops : min= 304, max= 322, avg=313.40, stdev= 7.31, samples=10 00:34:10.309 lat (msec) : 4=0.06%, 10=66.09%, 20=33.46%, 50=0.38% 00:34:10.309 cpu : usr=94.44%, sys=5.28%, ctx=9, majf=0, minf=9 00:34:10.309 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.309 issued rwts: total=1569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.309 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:10.309 filename0: (groupid=0, jobs=1): err= 0: pid=3103369: Tue Nov 19 13:26:12 2024 00:34:10.309 read: IOPS=300, BW=37.6MiB/s (39.4MB/s)(190MiB/5045msec) 00:34:10.309 slat (nsec): min=6295, max=95333, avg=11362.29, stdev=2850.30 00:34:10.309 clat (usec): min=3469, max=49557, avg=9942.74, stdev=2787.34 00:34:10.309 lat (usec): min=3476, max=49569, avg=9954.10, stdev=2787.61 00:34:10.309 clat percentiles (usec): 00:34:10.309 | 1.00th=[ 3818], 5.00th=[ 6456], 10.00th=[ 7963], 20.00th=[ 8848], 
00:34:10.309 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10421], 00:34:10.309 | 70.00th=[10683], 80.00th=[11207], 90.00th=[11731], 95.00th=[12125], 00:34:10.309 | 99.00th=[12911], 99.50th=[13435], 99.90th=[48497], 99.95th=[49546], 00:34:10.309 | 99.99th=[49546] 00:34:10.309 bw ( KiB/s): min=36608, max=45056, per=32.32%, avg=38732.80, stdev=2928.94, samples=10 00:34:10.309 iops : min= 286, max= 352, avg=302.60, stdev=22.88, samples=10 00:34:10.309 lat (msec) : 4=2.37%, 10=45.25%, 20=52.04%, 50=0.33% 00:34:10.309 cpu : usr=94.05%, sys=5.67%, ctx=9, majf=0, minf=10 00:34:10.309 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.309 issued rwts: total=1516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.309 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:10.309 00:34:10.309 Run status group 0 (all jobs): 00:34:10.309 READ: bw=117MiB/s (123MB/s), 37.6MiB/s-40.6MiB/s (39.4MB/s-42.6MB/s), io=590MiB (619MB), run=5004-5045msec 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.309 bdev_null0 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.309 [2024-11-19 13:26:12.990268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.309 13:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.309 bdev_null1 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.309 bdev_null2 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:10.309 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:10.310 { 00:34:10.310 "params": { 00:34:10.310 "name": "Nvme$subsystem", 00:34:10.310 "trtype": "$TEST_TRANSPORT", 00:34:10.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.310 "adrfam": "ipv4", 00:34:10.310 "trsvcid": "$NVMF_PORT", 00:34:10.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.310 "hdgst": ${hdgst:-false}, 00:34:10.310 "ddgst": ${ddgst:-false} 00:34:10.310 }, 00:34:10.310 "method": "bdev_nvme_attach_controller" 00:34:10.310 } 00:34:10.310 EOF 00:34:10.310 )") 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:10.310 { 00:34:10.310 "params": { 00:34:10.310 "name": "Nvme$subsystem", 00:34:10.310 "trtype": "$TEST_TRANSPORT", 00:34:10.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.310 "adrfam": "ipv4", 00:34:10.310 "trsvcid": "$NVMF_PORT", 00:34:10.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.310 "hdgst": ${hdgst:-false}, 00:34:10.310 "ddgst": ${ddgst:-false} 00:34:10.310 }, 00:34:10.310 "method": "bdev_nvme_attach_controller" 00:34:10.310 } 00:34:10.310 EOF 00:34:10.310 )") 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:10.310 13:26:13 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:10.310 { 00:34:10.310 "params": { 00:34:10.310 "name": "Nvme$subsystem", 00:34:10.310 "trtype": "$TEST_TRANSPORT", 00:34:10.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.310 "adrfam": "ipv4", 00:34:10.310 "trsvcid": "$NVMF_PORT", 00:34:10.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.310 "hdgst": ${hdgst:-false}, 00:34:10.310 "ddgst": ${ddgst:-false} 00:34:10.310 }, 00:34:10.310 "method": "bdev_nvme_attach_controller" 00:34:10.310 } 00:34:10.310 EOF 00:34:10.310 )") 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:10.310 "params": { 00:34:10.310 "name": "Nvme0", 00:34:10.310 "trtype": "tcp", 00:34:10.310 "traddr": "10.0.0.2", 00:34:10.310 "adrfam": "ipv4", 00:34:10.310 "trsvcid": "4420", 00:34:10.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:10.310 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:10.310 "hdgst": false, 00:34:10.310 "ddgst": false 00:34:10.310 }, 00:34:10.310 "method": "bdev_nvme_attach_controller" 00:34:10.310 },{ 00:34:10.310 "params": { 00:34:10.310 "name": "Nvme1", 00:34:10.310 "trtype": "tcp", 00:34:10.310 "traddr": "10.0.0.2", 00:34:10.310 "adrfam": "ipv4", 00:34:10.310 "trsvcid": "4420", 00:34:10.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:10.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:10.310 "hdgst": false, 00:34:10.310 "ddgst": false 00:34:10.310 }, 00:34:10.310 "method": "bdev_nvme_attach_controller" 00:34:10.310 },{ 00:34:10.310 "params": { 00:34:10.310 "name": "Nvme2", 00:34:10.310 "trtype": "tcp", 00:34:10.310 "traddr": "10.0.0.2", 00:34:10.310 "adrfam": "ipv4", 00:34:10.310 "trsvcid": "4420", 00:34:10.310 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:10.310 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:10.310 "hdgst": false, 00:34:10.310 "ddgst": false 00:34:10.310 }, 00:34:10.310 "method": "bdev_nvme_attach_controller" 00:34:10.310 }' 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:10.310 
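Before the run that follows: the merged JSON above attaches three controllers (Nvme0, Nvme1, Nvme2), gen_fio_conf emits one job section per file, and each section spawns numjobs=8 clones, hence fio's "Starting 24 threads" (3 sections x 8 jobs). A hypothetical job file equivalent to what fio reads here, with bdev names assumed as NvmeXn1:

cat <<'EOF' > rand_params.fio   # hypothetical file name
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=16
numjobs=8

; one section per attached subsystem
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
[filename2]
filename=Nvme2n1
EOF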
13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:10.310 13:26:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.310 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:10.310 ... 00:34:10.310 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:10.310 ... 00:34:10.310 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:10.310 ... 00:34:10.310 fio-3.35 00:34:10.310 Starting 24 threads 00:34:22.522 00:34:22.522 filename0: (groupid=0, jobs=1): err= 0: pid=3104539: Tue Nov 19 13:26:24 2024 00:34:22.522 read: IOPS=563, BW=2256KiB/s (2310kB/s)(22.2MiB/10071msec) 00:34:22.522 slat (nsec): min=6936, max=89215, avg=35201.09, stdev=19155.15 00:34:22.522 clat (usec): min=9143, max=95515, avg=28037.76, stdev=3878.98 00:34:22.522 lat (usec): min=9161, max=95527, avg=28072.96, stdev=3877.96 00:34:22.522 clat percentiles (usec): 00:34:22.522 | 1.00th=[18220], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:22.522 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:34:22.522 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:34:22.522 | 99.00th=[28967], 99.50th=[29230], 99.90th=[95945], 99.95th=[95945], 00:34:22.522 | 99.99th=[95945] 00:34:22.522 bw ( KiB/s): min= 2176, max= 2432, per=4.22%, avg=2265.60, stdev=73.12, samples=20 00:34:22.522 iops : min= 544, max= 608, avg=566.40, stdev=18.28, samples=20 00:34:22.522 lat (msec) : 10=0.04%, 20=1.09%, 50=98.59%, 100=0.28% 00:34:22.522 cpu : usr=98.60%, sys=1.04%, ctx=15, majf=0, minf=9 00:34:22.522 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:22.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.522 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.522 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.522 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.522 filename0: (groupid=0, jobs=1): err= 0: pid=3104540: Tue Nov 19 13:26:24 2024 00:34:22.522 read: IOPS=563, BW=2254KiB/s (2308kB/s)(22.3MiB/10119msec) 00:34:22.522 slat (nsec): min=6997, max=66140, avg=21197.38, stdev=7035.51 00:34:22.522 clat (msec): min=9, max=134, avg=28.21, stdev= 5.78 00:34:22.522 lat (msec): min=9, max=134, avg=28.23, stdev= 5.78 00:34:22.522 clat percentiles (msec): 00:34:22.522 | 1.00th=[ 18], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:34:22.522 | 30.00th=[ 28], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:34:22.522 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:34:22.522 | 99.00th=[ 29], 99.50th=[ 30], 99.90th=[ 132], 99.95th=[ 132], 00:34:22.522 | 99.99th=[ 136] 00:34:22.522 bw ( KiB/s): min= 2176, max= 2608, per=4.24%, avg=2274.40, stdev=99.89, samples=20 00:34:22.522 iops : min= 544, max= 652, avg=568.60, stdev=24.97, samples=20 00:34:22.522 lat (msec) : 10=0.04%, 20=1.37%, 50=98.32%, 250=0.28% 00:34:22.522 cpu : usr=98.50%, sys=1.14%, ctx=13, majf=0, minf=9 00:34:22.522 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, 
>=64=0.0% 00:34:22.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.522 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.522 issued rwts: total=5702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.522 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.522 filename0: (groupid=0, jobs=1): err= 0: pid=3104541: Tue Nov 19 13:26:24 2024 00:34:22.522 read: IOPS=558, BW=2234KiB/s (2287kB/s)(22.0MiB/10085msec) 00:34:22.522 slat (nsec): min=4543, max=43980, avg=20293.88, stdev=6349.45 00:34:22.522 clat (msec): min=27, max=127, avg=28.45, stdev= 5.23 00:34:22.522 lat (msec): min=27, max=127, avg=28.47, stdev= 5.23 00:34:22.522 clat percentiles (msec): 00:34:22.522 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:34:22.522 | 30.00th=[ 28], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:34:22.522 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:34:22.522 | 99.00th=[ 30], 99.50th=[ 42], 99.90th=[ 126], 99.95th=[ 126], 00:34:22.522 | 99.99th=[ 128] 00:34:22.522 bw ( KiB/s): min= 2167, max= 2304, per=4.19%, avg=2246.15, stdev=65.65, samples=20 00:34:22.522 iops : min= 541, max= 576, avg=561.50, stdev=16.46, samples=20 00:34:22.522 lat (msec) : 50=99.72%, 250=0.28% 00:34:22.522 cpu : usr=98.62%, sys=1.01%, ctx=15, majf=0, minf=9 00:34:22.523 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.523 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.523 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.523 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.523 filename0: (groupid=0, jobs=1): err= 0: pid=3104542: Tue Nov 19 13:26:24 2024 00:34:22.523 read: IOPS=559, BW=2239KiB/s (2293kB/s)(21.9MiB/10034msec) 00:34:22.523 slat (nsec): min=7737, max=88990, avg=38641.49, stdev=19076.95 00:34:22.523 clat (usec): min=19680, max=95621, avg=28211.03, stdev=3785.64 00:34:22.523 lat (usec): min=19690, max=95658, avg=28249.67, stdev=3785.48 00:34:22.523 clat percentiles (usec): 00:34:22.523 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:22.523 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.523 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:22.523 | 99.00th=[28967], 99.50th=[49546], 99.90th=[94897], 99.95th=[95945], 00:34:22.523 | 99.99th=[95945] 00:34:22.523 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2240.00, stdev=88.10, samples=20 00:34:22.523 iops : min= 512, max= 576, avg=560.00, stdev=22.02, samples=20 00:34:22.523 lat (msec) : 20=0.04%, 50=99.68%, 100=0.28% 00:34:22.523 cpu : usr=98.62%, sys=1.01%, ctx=15, majf=0, minf=9 00:34:22.523 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:22.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.523 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.523 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.523 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.523 filename0: (groupid=0, jobs=1): err= 0: pid=3104543: Tue Nov 19 13:26:24 2024 00:34:22.523 read: IOPS=558, BW=2233KiB/s (2287kB/s)(22.0MiB/10088msec) 00:34:22.523 slat (nsec): min=6230, max=45370, avg=21955.51, stdev=6357.71 00:34:22.523 clat (msec): min=27, max=132, avg=28.46, stdev= 5.56 00:34:22.523 
lat (msec): min=27, max=132, avg=28.48, stdev= 5.56 00:34:22.523 clat percentiles (msec): 00:34:22.523 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:34:22.523 | 30.00th=[ 28], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:34:22.523 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:34:22.523 | 99.00th=[ 30], 99.50th=[ 40], 99.90th=[ 132], 99.95th=[ 132], 00:34:22.523 | 99.99th=[ 132] 00:34:22.523 bw ( KiB/s): min= 2052, max= 2304, per=4.19%, avg=2246.60, stdev=76.88, samples=20 00:34:22.523 iops : min= 513, max= 576, avg=561.65, stdev=19.22, samples=20 00:34:22.523 lat (msec) : 50=99.72%, 250=0.28% 00:34:22.523 cpu : usr=98.71%, sys=0.93%, ctx=11, majf=0, minf=9 00:34:22.523 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.523 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.523 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.523 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.523 filename0: (groupid=0, jobs=1): err= 0: pid=3104544: Tue Nov 19 13:26:24 2024 00:34:22.523 read: IOPS=559, BW=2239KiB/s (2293kB/s)(21.9MiB/10034msec) 00:34:22.523 slat (nsec): min=7237, max=91540, avg=34050.98, stdev=19099.10 00:34:22.523 clat (usec): min=27100, max=95797, avg=28240.08, stdev=3783.66 00:34:22.523 lat (usec): min=27115, max=95831, avg=28274.13, stdev=3783.73 00:34:22.523 clat percentiles (usec): 00:34:22.523 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:22.523 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.523 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:22.523 | 99.00th=[28967], 99.50th=[49546], 99.90th=[95945], 99.95th=[95945], 00:34:22.523 | 99.99th=[95945] 00:34:22.523 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2240.00, stdev=88.10, samples=20 00:34:22.523 iops : min= 512, max= 576, avg=560.00, stdev=22.02, samples=20 00:34:22.523 lat (msec) : 50=99.72%, 100=0.28% 00:34:22.523 cpu : usr=98.66%, sys=0.97%, ctx=11, majf=0, minf=9 00:34:22.523 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.523 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.523 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.523 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.523 filename0: (groupid=0, jobs=1): err= 0: pid=3104545: Tue Nov 19 13:26:24 2024 00:34:22.523 read: IOPS=563, BW=2256KiB/s (2310kB/s)(22.2MiB/10071msec) 00:34:22.523 slat (nsec): min=7248, max=74231, avg=25003.79, stdev=13279.49 00:34:22.523 clat (usec): min=11420, max=95119, avg=28167.94, stdev=3849.39 00:34:22.523 lat (usec): min=11482, max=95148, avg=28192.94, stdev=3849.11 00:34:22.523 clat percentiles (usec): 00:34:22.523 | 1.00th=[18220], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:34:22.523 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:34:22.523 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:22.523 | 99.00th=[28967], 99.50th=[29492], 99.90th=[94897], 99.95th=[94897], 00:34:22.523 | 99.99th=[94897] 00:34:22.523 bw ( KiB/s): min= 2176, max= 2432, per=4.22%, avg=2265.60, stdev=73.12, samples=20 00:34:22.523 iops : min= 544, max= 608, avg=566.40, stdev=18.28, samples=20 
00:34:22.523 lat (msec) : 20=1.13%, 50=98.59%, 100=0.28% 00:34:22.523 cpu : usr=98.59%, sys=0.96%, ctx=66, majf=0, minf=9 00:34:22.523 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.523 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.523 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.523 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.523 filename0: (groupid=0, jobs=1): err= 0: pid=3104546: Tue Nov 19 13:26:24 2024 00:34:22.523 read: IOPS=563, BW=2256KiB/s (2310kB/s)(22.2MiB/10072msec) 00:34:22.523 slat (nsec): min=7612, max=92790, avg=35471.98, stdev=18991.86 00:34:22.523 clat (usec): min=11341, max=95602, avg=28021.91, stdev=3868.81 00:34:22.523 lat (usec): min=11370, max=95644, avg=28057.38, stdev=3869.50 00:34:22.523 clat percentiles (usec): 00:34:22.523 | 1.00th=[18220], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:22.523 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.523 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:22.523 | 99.00th=[28967], 99.50th=[29230], 99.90th=[94897], 99.95th=[95945], 00:34:22.523 | 99.99th=[95945] 00:34:22.523 bw ( KiB/s): min= 2176, max= 2432, per=4.22%, avg=2265.60, stdev=73.12, samples=20 00:34:22.523 iops : min= 544, max= 608, avg=566.40, stdev=18.28, samples=20 00:34:22.523 lat (msec) : 20=1.13%, 50=98.59%, 100=0.28% 00:34:22.523 cpu : usr=98.51%, sys=1.12%, ctx=16, majf=0, minf=9 00:34:22.523 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.523 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.523 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.523 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.523 filename1: (groupid=0, jobs=1): err= 0: pid=3104547: Tue Nov 19 13:26:24 2024 00:34:22.523 read: IOPS=562, BW=2252KiB/s (2306kB/s)(22.2MiB/10119msec) 00:34:22.523 slat (nsec): min=7352, max=77008, avg=18245.91, stdev=6943.46 00:34:22.523 clat (msec): min=11, max=125, avg=28.27, stdev= 5.40 00:34:22.523 lat (msec): min=11, max=125, avg=28.29, stdev= 5.40 00:34:22.523 clat percentiles (msec): 00:34:22.523 | 1.00th=[ 15], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:34:22.523 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:34:22.523 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:34:22.523 | 99.00th=[ 30], 99.50th=[ 30], 99.90th=[ 126], 99.95th=[ 126], 00:34:22.523 | 99.99th=[ 126] 00:34:22.523 bw ( KiB/s): min= 2176, max= 2432, per=4.24%, avg=2272.00, stdev=70.42, samples=20 00:34:22.523 iops : min= 544, max= 608, avg=568.00, stdev=17.60, samples=20 00:34:22.523 lat (msec) : 20=1.12%, 50=98.60%, 250=0.28% 00:34:22.523 cpu : usr=98.43%, sys=1.21%, ctx=11, majf=0, minf=9 00:34:22.523 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.523 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.523 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.523 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.523 filename1: (groupid=0, jobs=1): err= 0: pid=3104548: Tue Nov 19 13:26:24 2024 00:34:22.523 
read: IOPS=562, BW=2252KiB/s (2306kB/s)(22.2MiB/10119msec) 00:34:22.523 slat (nsec): min=7687, max=47577, avg=20370.30, stdev=5535.12 00:34:22.523 clat (msec): min=9, max=125, avg=28.25, stdev= 5.40 00:34:22.523 lat (msec): min=9, max=125, avg=28.27, stdev= 5.40 00:34:22.523 clat percentiles (msec): 00:34:22.523 | 1.00th=[ 15], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:34:22.523 | 30.00th=[ 28], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:34:22.523 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:34:22.523 | 99.00th=[ 30], 99.50th=[ 30], 99.90th=[ 126], 99.95th=[ 126], 00:34:22.523 | 99.99th=[ 126] 00:34:22.523 bw ( KiB/s): min= 2176, max= 2432, per=4.24%, avg=2272.00, stdev=70.42, samples=20 00:34:22.523 iops : min= 544, max= 608, avg=568.00, stdev=17.60, samples=20 00:34:22.523 lat (msec) : 10=0.04%, 20=1.09%, 50=98.60%, 250=0.28% 00:34:22.523 cpu : usr=98.45%, sys=1.20%, ctx=14, majf=0, minf=9 00:34:22.523 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:22.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.523 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.523 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.523 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.524 filename1: (groupid=0, jobs=1): err= 0: pid=3104549: Tue Nov 19 13:26:24 2024 00:34:22.524 read: IOPS=562, BW=2252KiB/s (2306kB/s)(22.2MiB/10119msec) 00:34:22.524 slat (nsec): min=7326, max=62267, avg=21000.12, stdev=5862.04 00:34:22.524 clat (msec): min=8, max=125, avg=28.24, stdev= 5.40 00:34:22.524 lat (msec): min=8, max=125, avg=28.26, stdev= 5.40 00:34:22.524 clat percentiles (msec): 00:34:22.524 | 1.00th=[ 14], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:34:22.524 | 30.00th=[ 28], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:34:22.524 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:34:22.524 | 99.00th=[ 30], 99.50th=[ 30], 99.90th=[ 126], 99.95th=[ 126], 00:34:22.524 | 99.99th=[ 126] 00:34:22.524 bw ( KiB/s): min= 2176, max= 2432, per=4.24%, avg=2272.00, stdev=70.42, samples=20 00:34:22.524 iops : min= 544, max= 608, avg=568.00, stdev=17.60, samples=20 00:34:22.524 lat (msec) : 10=0.04%, 20=1.05%, 50=98.63%, 250=0.28% 00:34:22.524 cpu : usr=98.49%, sys=1.16%, ctx=14, majf=0, minf=9 00:34:22.524 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:22.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.524 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.524 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.524 filename1: (groupid=0, jobs=1): err= 0: pid=3104550: Tue Nov 19 13:26:24 2024 00:34:22.524 read: IOPS=561, BW=2247KiB/s (2301kB/s)(22.1MiB/10056msec) 00:34:22.524 slat (nsec): min=7195, max=91714, avg=34097.95, stdev=18957.49 00:34:22.524 clat (usec): min=18144, max=95726, avg=28141.83, stdev=3638.38 00:34:22.524 lat (usec): min=18151, max=95749, avg=28175.93, stdev=3639.07 00:34:22.524 clat percentiles (usec): 00:34:22.524 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:22.524 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.524 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:22.524 | 99.00th=[28967], 99.50th=[29230], 99.90th=[95945], 99.95th=[95945], 00:34:22.524 | 
99.99th=[95945] 00:34:22.524 bw ( KiB/s): min= 2129, max= 2304, per=4.20%, avg=2250.45, stdev=68.04, samples=20 00:34:22.524 iops : min= 532, max= 576, avg=562.60, stdev=17.03, samples=20 00:34:22.524 lat (msec) : 20=0.28%, 50=99.43%, 100=0.28% 00:34:22.524 cpu : usr=98.60%, sys=1.03%, ctx=20, majf=0, minf=9 00:34:22.524 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.524 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.524 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.524 filename1: (groupid=0, jobs=1): err= 0: pid=3104551: Tue Nov 19 13:26:24 2024 00:34:22.524 read: IOPS=558, BW=2234KiB/s (2287kB/s)(22.0MiB/10086msec) 00:34:22.524 slat (nsec): min=4079, max=51501, avg=22227.48, stdev=6371.57 00:34:22.524 clat (msec): min=27, max=132, avg=28.45, stdev= 5.55 00:34:22.524 lat (msec): min=27, max=132, avg=28.47, stdev= 5.55 00:34:22.524 clat percentiles (msec): 00:34:22.524 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:34:22.524 | 30.00th=[ 28], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:34:22.524 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:34:22.524 | 99.00th=[ 30], 99.50th=[ 37], 99.90th=[ 132], 99.95th=[ 132], 00:34:22.524 | 99.99th=[ 132] 00:34:22.524 bw ( KiB/s): min= 2015, max= 2308, per=4.18%, avg=2244.95, stdev=82.23, samples=20 00:34:22.524 iops : min= 503, max= 577, avg=561.20, stdev=20.67, samples=20 00:34:22.524 lat (msec) : 50=99.72%, 250=0.28% 00:34:22.524 cpu : usr=98.54%, sys=1.09%, ctx=20, majf=0, minf=9 00:34:22.524 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.524 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.524 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.524 filename1: (groupid=0, jobs=1): err= 0: pid=3104552: Tue Nov 19 13:26:24 2024 00:34:22.524 read: IOPS=559, BW=2239KiB/s (2293kB/s)(21.9MiB/10034msec) 00:34:22.524 slat (nsec): min=6934, max=91284, avg=33883.60, stdev=19033.12 00:34:22.524 clat (usec): min=27139, max=95877, avg=28239.85, stdev=3794.39 00:34:22.524 lat (usec): min=27165, max=95912, avg=28273.74, stdev=3794.51 00:34:22.524 clat percentiles (usec): 00:34:22.524 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:22.524 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.524 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:22.524 | 99.00th=[28967], 99.50th=[50070], 99.90th=[95945], 99.95th=[95945], 00:34:22.524 | 99.99th=[95945] 00:34:22.524 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2240.00, stdev=88.10, samples=20 00:34:22.524 iops : min= 512, max= 576, avg=560.00, stdev=22.02, samples=20 00:34:22.524 lat (msec) : 50=99.47%, 100=0.53% 00:34:22.524 cpu : usr=98.52%, sys=1.12%, ctx=13, majf=0, minf=9 00:34:22.524 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.524 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.524 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
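Each block's "IO depths" line is, roughly, a histogram of the queue depth at which I/Os were issued; with iodepth=16 the 1/2/4/8/16 buckets should account for essentially all I/Os. A quick sanity check on the buckets reported above (rounding explains the missing 0.1%):

  # sum the depth-bucket percentages from one "IO depths" line
  echo '6.2 12.5 25.0 50.0 6.2' | awk '{for(i=1;i<=NF;i++)s+=$i; print s "%"}'   # -> 99.9%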
00:34:22.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.524 filename1: (groupid=0, jobs=1): err= 0: pid=3104553: Tue Nov 19 13:26:24 2024 00:34:22.524 read: IOPS=558, BW=2233KiB/s (2287kB/s)(22.0MiB/10088msec) 00:34:22.524 slat (nsec): min=5964, max=44761, avg=22407.24, stdev=6439.00 00:34:22.524 clat (msec): min=27, max=131, avg=28.46, stdev= 5.55 00:34:22.524 lat (msec): min=27, max=131, avg=28.48, stdev= 5.55 00:34:22.524 clat percentiles (msec): 00:34:22.524 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:34:22.524 | 30.00th=[ 28], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:34:22.524 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:34:22.524 | 99.00th=[ 30], 99.50th=[ 40], 99.90th=[ 132], 99.95th=[ 132], 00:34:22.524 | 99.99th=[ 132] 00:34:22.524 bw ( KiB/s): min= 2052, max= 2304, per=4.19%, avg=2246.80, stdev=76.69, samples=20 00:34:22.524 iops : min= 513, max= 576, avg=561.70, stdev=19.17, samples=20 00:34:22.524 lat (msec) : 50=99.72%, 250=0.28% 00:34:22.524 cpu : usr=98.85%, sys=0.78%, ctx=14, majf=0, minf=9 00:34:22.524 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.524 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.524 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.524 filename1: (groupid=0, jobs=1): err= 0: pid=3104554: Tue Nov 19 13:26:24 2024 00:34:22.524 read: IOPS=563, BW=2255KiB/s (2309kB/s)(22.2MiB/10082msec) 00:34:22.524 slat (nsec): min=4309, max=91918, avg=17167.16, stdev=14872.23 00:34:22.524 clat (msec): min=22, max=101, avg=28.26, stdev= 4.86 00:34:22.524 lat (msec): min=22, max=101, avg=28.28, stdev= 4.86 00:34:22.524 clat percentiles (msec): 00:34:22.524 | 1.00th=[ 23], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 28], 00:34:22.524 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:34:22.524 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 33], 95.00th=[ 34], 00:34:22.524 | 99.00th=[ 35], 99.50th=[ 51], 99.90th=[ 102], 99.95th=[ 102], 00:34:22.524 | 99.99th=[ 102] 00:34:22.524 bw ( KiB/s): min= 2112, max= 2336, per=4.23%, avg=2267.20, stdev=55.93, samples=20 00:34:22.524 iops : min= 528, max= 584, avg=566.80, stdev=13.98, samples=20 00:34:22.524 lat (msec) : 50=99.68%, 100=0.14%, 250=0.18% 00:34:22.524 cpu : usr=98.75%, sys=0.88%, ctx=13, majf=0, minf=9 00:34:22.524 IO depths : 1=0.1%, 2=0.1%, 4=1.9%, 8=81.2%, 16=16.8%, 32=0.0%, >=64=0.0% 00:34:22.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.524 complete : 0=0.0%, 4=89.0%, 8=9.5%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.524 issued rwts: total=5684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.524 filename2: (groupid=0, jobs=1): err= 0: pid=3104555: Tue Nov 19 13:26:24 2024 00:34:22.524 read: IOPS=558, BW=2234KiB/s (2288kB/s)(22.0MiB/10083msec) 00:34:22.524 slat (nsec): min=4221, max=43512, avg=20769.11, stdev=6167.10 00:34:22.524 clat (msec): min=27, max=125, avg=28.45, stdev= 5.22 00:34:22.524 lat (msec): min=27, max=125, avg=28.47, stdev= 5.22 00:34:22.524 clat percentiles (msec): 00:34:22.524 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:34:22.524 | 30.00th=[ 28], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:34:22.524 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 
95.00th=[ 29], 00:34:22.524 | 99.00th=[ 30], 99.50th=[ 43], 99.90th=[ 126], 99.95th=[ 126], 00:34:22.524 | 99.99th=[ 126] 00:34:22.524 bw ( KiB/s): min= 2167, max= 2304, per=4.19%, avg=2245.95, stdev=65.87, samples=20 00:34:22.524 iops : min= 541, max= 576, avg=561.45, stdev=16.52, samples=20 00:34:22.524 lat (msec) : 50=99.72%, 250=0.28% 00:34:22.524 cpu : usr=98.73%, sys=0.92%, ctx=5, majf=0, minf=9 00:34:22.524 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.524 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.524 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.524 filename2: (groupid=0, jobs=1): err= 0: pid=3104556: Tue Nov 19 13:26:24 2024 00:34:22.524 read: IOPS=559, BW=2237KiB/s (2291kB/s)(22.1MiB/10098msec) 00:34:22.524 slat (nsec): min=7178, max=44289, avg=18718.49, stdev=7179.97 00:34:22.524 clat (msec): min=20, max=131, avg=28.46, stdev= 5.52 00:34:22.524 lat (msec): min=20, max=131, avg=28.48, stdev= 5.52 00:34:22.524 clat percentiles (msec): 00:34:22.525 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:34:22.525 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:34:22.525 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:34:22.525 | 99.00th=[ 30], 99.50th=[ 30], 99.90th=[ 132], 99.95th=[ 132], 00:34:22.525 | 99.99th=[ 132] 00:34:22.525 bw ( KiB/s): min= 2141, max= 2304, per=4.20%, avg=2251.05, stdev=66.96, samples=20 00:34:22.525 iops : min= 535, max= 576, avg=562.75, stdev=16.76, samples=20 00:34:22.525 lat (msec) : 50=99.72%, 250=0.28% 00:34:22.525 cpu : usr=98.50%, sys=1.09%, ctx=14, majf=0, minf=9 00:34:22.525 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.525 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.525 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.525 filename2: (groupid=0, jobs=1): err= 0: pid=3104557: Tue Nov 19 13:26:24 2024 00:34:22.525 read: IOPS=558, BW=2234KiB/s (2288kB/s)(22.0MiB/10082msec) 00:34:22.525 slat (nsec): min=4548, max=55891, avg=21041.80, stdev=6214.47 00:34:22.525 clat (msec): min=20, max=125, avg=28.45, stdev= 5.23 00:34:22.525 lat (msec): min=20, max=125, avg=28.47, stdev= 5.23 00:34:22.525 clat percentiles (msec): 00:34:22.525 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:34:22.525 | 30.00th=[ 28], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:34:22.525 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:34:22.525 | 99.00th=[ 30], 99.50th=[ 42], 99.90th=[ 126], 99.95th=[ 126], 00:34:22.525 | 99.99th=[ 126] 00:34:22.525 bw ( KiB/s): min= 2167, max= 2304, per=4.19%, avg=2246.15, stdev=65.65, samples=20 00:34:22.525 iops : min= 541, max= 576, avg=561.50, stdev=16.46, samples=20 00:34:22.525 lat (msec) : 50=99.72%, 250=0.28% 00:34:22.525 cpu : usr=98.33%, sys=1.31%, ctx=13, majf=0, minf=9 00:34:22.525 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:22.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.525 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.525 issued rwts: 
total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.525 filename2: (groupid=0, jobs=1): err= 0: pid=3104558: Tue Nov 19 13:26:24 2024 00:34:22.525 read: IOPS=565, BW=2262KiB/s (2316kB/s)(22.1MiB/10016msec) 00:34:22.525 slat (nsec): min=7217, max=42111, avg=17187.01, stdev=6727.19 00:34:22.525 clat (usec): min=11536, max=47912, avg=28152.28, stdev=1581.21 00:34:22.525 lat (usec): min=11544, max=47939, avg=28169.47, stdev=1580.94 00:34:22.525 clat percentiles (usec): 00:34:22.525 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:34:22.525 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:34:22.525 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:34:22.525 | 99.00th=[29230], 99.50th=[29754], 99.90th=[47973], 99.95th=[47973], 00:34:22.525 | 99.99th=[47973] 00:34:22.525 bw ( KiB/s): min= 2176, max= 2304, per=4.21%, avg=2259.20, stdev=62.64, samples=20 00:34:22.525 iops : min= 544, max= 576, avg=564.80, stdev=15.66, samples=20 00:34:22.525 lat (msec) : 20=0.56%, 50=99.44% 00:34:22.525 cpu : usr=98.63%, sys=0.99%, ctx=23, majf=0, minf=9 00:34:22.525 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:22.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.525 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.525 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.525 filename2: (groupid=0, jobs=1): err= 0: pid=3104559: Tue Nov 19 13:26:24 2024 00:34:22.525 read: IOPS=558, BW=2233KiB/s (2287kB/s)(22.0MiB/10088msec) 00:34:22.525 slat (nsec): min=6045, max=49836, avg=22059.81, stdev=6388.50 00:34:22.525 clat (msec): min=18, max=131, avg=28.47, stdev= 5.57 00:34:22.525 lat (msec): min=18, max=132, avg=28.49, stdev= 5.57 00:34:22.525 clat percentiles (msec): 00:34:22.525 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:34:22.525 | 30.00th=[ 28], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:34:22.525 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:34:22.525 | 99.00th=[ 30], 99.50th=[ 40], 99.90th=[ 132], 99.95th=[ 132], 00:34:22.525 | 99.99th=[ 132] 00:34:22.525 bw ( KiB/s): min= 2052, max= 2304, per=4.19%, avg=2246.80, stdev=76.69, samples=20 00:34:22.525 iops : min= 513, max= 576, avg=561.70, stdev=19.17, samples=20 00:34:22.525 lat (msec) : 20=0.04%, 50=99.68%, 250=0.28% 00:34:22.525 cpu : usr=98.61%, sys=1.03%, ctx=7, majf=0, minf=9 00:34:22.525 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:22.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.525 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.525 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.525 filename2: (groupid=0, jobs=1): err= 0: pid=3104560: Tue Nov 19 13:26:24 2024 00:34:22.525 read: IOPS=559, BW=2239KiB/s (2293kB/s)(21.9MiB/10034msec) 00:34:22.525 slat (nsec): min=7281, max=91690, avg=33454.58, stdev=19176.65 00:34:22.525 clat (usec): min=27142, max=95883, avg=28239.82, stdev=3787.52 00:34:22.525 lat (usec): min=27166, max=95913, avg=28273.28, stdev=3787.67 00:34:22.525 clat percentiles (usec): 00:34:22.525 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:22.525 | 
30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:22.525 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:22.525 | 99.00th=[28967], 99.50th=[49546], 99.90th=[95945], 99.95th=[95945], 00:34:22.525 | 99.99th=[95945] 00:34:22.525 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2240.00, stdev=88.10, samples=20 00:34:22.525 iops : min= 512, max= 576, avg=560.00, stdev=22.02, samples=20 00:34:22.525 lat (msec) : 50=99.72%, 100=0.28% 00:34:22.525 cpu : usr=98.66%, sys=0.97%, ctx=17, majf=0, minf=9 00:34:22.525 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.525 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.525 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.525 filename2: (groupid=0, jobs=1): err= 0: pid=3104561: Tue Nov 19 13:26:24 2024 00:34:22.525 read: IOPS=563, BW=2252KiB/s (2306kB/s)(22.2MiB/10116msec) 00:34:22.525 slat (nsec): min=6498, max=41479, avg=10511.54, stdev=3834.18 00:34:22.525 clat (msec): min=7, max=131, avg=28.32, stdev= 5.76 00:34:22.525 lat (msec): min=7, max=131, avg=28.33, stdev= 5.76 00:34:22.525 clat percentiles (msec): 00:34:22.525 | 1.00th=[ 15], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 29], 00:34:22.525 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:34:22.525 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:34:22.525 | 99.00th=[ 30], 99.50th=[ 30], 99.90th=[ 132], 99.95th=[ 132], 00:34:22.525 | 99.99th=[ 132] 00:34:22.525 bw ( KiB/s): min= 2176, max= 2560, per=4.24%, avg=2272.00, stdev=91.69, samples=20 00:34:22.525 iops : min= 544, max= 640, avg=568.00, stdev=22.92, samples=20 00:34:22.525 lat (msec) : 10=0.37%, 20=0.97%, 50=98.38%, 250=0.28% 00:34:22.525 cpu : usr=98.26%, sys=1.38%, ctx=16, majf=0, minf=9 00:34:22.525 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:22.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.525 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.525 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.525 filename2: (groupid=0, jobs=1): err= 0: pid=3104562: Tue Nov 19 13:26:24 2024 00:34:22.525 read: IOPS=557, BW=2228KiB/s (2282kB/s)(21.9MiB/10081msec) 00:34:22.525 slat (nsec): min=6840, max=72061, avg=24595.07, stdev=11610.63 00:34:22.525 clat (msec): min=26, max=134, avg=28.48, stdev= 5.79 00:34:22.525 lat (msec): min=26, max=134, avg=28.51, stdev= 5.79 00:34:22.525 clat percentiles (msec): 00:34:22.525 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:34:22.525 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 29], 60.00th=[ 29], 00:34:22.525 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:34:22.525 | 99.00th=[ 30], 99.50th=[ 59], 99.90th=[ 132], 99.95th=[ 132], 00:34:22.525 | 99.99th=[ 136] 00:34:22.525 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2240.00, stdev=88.10, samples=20 00:34:22.525 iops : min= 512, max= 576, avg=560.00, stdev=22.02, samples=20 00:34:22.525 lat (msec) : 50=99.43%, 100=0.28%, 250=0.28% 00:34:22.525 cpu : usr=98.58%, sys=1.04%, ctx=6, majf=0, minf=9 00:34:22.525 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:22.525 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.525 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.525 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:22.525 00:34:22.525 Run status group 0 (all jobs): 00:34:22.525 READ: bw=52.4MiB/s (54.9MB/s), 2228KiB/s-2262KiB/s (2282kB/s-2316kB/s), io=530MiB (556MB), run=10016-10119msec 00:34:22.525 13:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:22.525 13:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:22.525 13:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:22.525 13:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:22.525 13:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:22.525 13:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:22.525 13:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.525 13:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.525 13:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.525 13:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:22.525 13:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.526 13:26:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.526 bdev_null0 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.526 [2024-11-19 13:26:25.080044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # 
for sub in "$@" 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.526 bdev_null1 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:22.526 { 00:34:22.526 "params": { 00:34:22.526 "name": "Nvme$subsystem", 00:34:22.526 "trtype": "$TEST_TRANSPORT", 00:34:22.526 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:34:22.526 "adrfam": "ipv4", 00:34:22.526 "trsvcid": "$NVMF_PORT", 00:34:22.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:22.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:22.526 "hdgst": ${hdgst:-false}, 00:34:22.526 "ddgst": ${ddgst:-false} 00:34:22.526 }, 00:34:22.526 "method": "bdev_nvme_attach_controller" 00:34:22.526 } 00:34:22.526 EOF 00:34:22.526 )") 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:22.526 { 00:34:22.526 "params": { 00:34:22.526 "name": "Nvme$subsystem", 00:34:22.526 "trtype": "$TEST_TRANSPORT", 00:34:22.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:22.526 "adrfam": "ipv4", 00:34:22.526 "trsvcid": "$NVMF_PORT", 00:34:22.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:22.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:22.526 "hdgst": ${hdgst:-false}, 00:34:22.526 "ddgst": ${ddgst:-false} 00:34:22.526 }, 00:34:22.526 "method": "bdev_nvme_attach_controller" 00:34:22.526 } 00:34:22.526 EOF 00:34:22.526 )") 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:22.526 13:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:22.527 13:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:22.527 13:26:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:22.527 "params": { 00:34:22.527 "name": "Nvme0", 00:34:22.527 "trtype": "tcp", 00:34:22.527 "traddr": "10.0.0.2", 00:34:22.527 "adrfam": "ipv4", 00:34:22.527 "trsvcid": "4420", 00:34:22.527 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:22.527 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:22.527 "hdgst": false, 00:34:22.527 "ddgst": false 00:34:22.527 }, 00:34:22.527 "method": "bdev_nvme_attach_controller" 00:34:22.527 },{ 00:34:22.527 "params": { 00:34:22.527 "name": "Nvme1", 00:34:22.527 "trtype": "tcp", 00:34:22.527 "traddr": "10.0.0.2", 00:34:22.527 "adrfam": "ipv4", 00:34:22.527 "trsvcid": "4420", 00:34:22.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:22.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:22.527 "hdgst": false, 00:34:22.527 "ddgst": false 00:34:22.527 }, 00:34:22.527 "method": "bdev_nvme_attach_controller" 00:34:22.527 }' 00:34:22.527 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:22.527 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:22.527 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:22.527 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.527 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:22.527 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:22.527 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:22.527 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:22.527 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:22.527 13:26:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.527 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:22.527 ... 00:34:22.527 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:22.527 ... 
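The xtrace above shows how the harness drives fio's SPDK bdev plugin: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem, jq validates and joins them, and the result is handed to fio_bdev as --spdk_json_conf over /dev/fd/62 while the generated job file arrives on /dev/fd/61. A stand-alone sketch of one such stanza, with every value copied from the printf output above (this reproduces the generated fragment, not the dif.sh helper itself):

  gen_conf() {
    cat <<'EOF'
  {
    "params": {
      "name": "Nvme0",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode0",
      "hostnqn": "nqn.2016-06.io.spdk:host0",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }
  EOF
  }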
00:34:22.527 fio-3.35 00:34:22.527 Starting 4 threads 00:34:29.102 00:34:29.102 filename0: (groupid=0, jobs=1): err= 0: pid=3106505: Tue Nov 19 13:26:31 2024 00:34:29.102 read: IOPS=2758, BW=21.5MiB/s (22.6MB/s)(108MiB/5003msec) 00:34:29.102 slat (nsec): min=6105, max=40053, avg=8853.69, stdev=3027.35 00:34:29.102 clat (usec): min=566, max=5597, avg=2873.65, stdev=409.65 00:34:29.102 lat (usec): min=580, max=5603, avg=2882.51, stdev=409.58 00:34:29.102 clat percentiles (usec): 00:34:29.102 | 1.00th=[ 1795], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2540], 00:34:29.102 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2966], 60.00th=[ 2999], 00:34:29.102 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3261], 95.00th=[ 3490], 00:34:29.102 | 99.00th=[ 4113], 99.50th=[ 4293], 99.90th=[ 4948], 99.95th=[ 5080], 00:34:29.102 | 99.99th=[ 5604] 00:34:29.102 bw ( KiB/s): min=20944, max=23984, per=26.47%, avg=22090.67, stdev=868.25, samples=9 00:34:29.102 iops : min= 2618, max= 2998, avg=2761.33, stdev=108.53, samples=9 00:34:29.102 lat (usec) : 750=0.02%, 1000=0.03% 00:34:29.102 lat (msec) : 2=1.59%, 4=96.89%, 10=1.47% 00:34:29.102 cpu : usr=95.60%, sys=4.08%, ctx=8, majf=0, minf=9 00:34:29.102 IO depths : 1=0.4%, 2=6.6%, 4=64.5%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.102 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.102 issued rwts: total=13799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.102 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:29.102 filename0: (groupid=0, jobs=1): err= 0: pid=3106506: Tue Nov 19 13:26:31 2024 00:34:29.102 read: IOPS=2541, BW=19.9MiB/s (20.8MB/s)(99.3MiB/5002msec) 00:34:29.102 slat (nsec): min=6111, max=56047, avg=8995.13, stdev=3286.52 00:34:29.102 clat (usec): min=811, max=6524, avg=3120.91, stdev=481.28 00:34:29.102 lat (usec): min=822, max=6530, avg=3129.90, stdev=480.99 00:34:29.102 clat percentiles (usec): 00:34:29.102 | 1.00th=[ 2057], 5.00th=[ 2474], 10.00th=[ 2638], 20.00th=[ 2868], 00:34:29.102 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3032], 60.00th=[ 3064], 00:34:29.102 | 70.00th=[ 3163], 80.00th=[ 3326], 90.00th=[ 3687], 95.00th=[ 4015], 00:34:29.102 | 99.00th=[ 4883], 99.50th=[ 5145], 99.90th=[ 5407], 99.95th=[ 5604], 00:34:29.102 | 99.99th=[ 6521] 00:34:29.102 bw ( KiB/s): min=19632, max=20944, per=24.35%, avg=20321.78, stdev=452.65, samples=9 00:34:29.102 iops : min= 2454, max= 2618, avg=2540.22, stdev=56.58, samples=9 00:34:29.102 lat (usec) : 1000=0.02% 00:34:29.102 lat (msec) : 2=0.71%, 4=94.15%, 10=5.11% 00:34:29.102 cpu : usr=95.76%, sys=3.92%, ctx=10, majf=0, minf=9 00:34:29.102 IO depths : 1=0.2%, 2=4.1%, 4=67.5%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.102 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.102 issued rwts: total=12711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.102 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:29.102 filename1: (groupid=0, jobs=1): err= 0: pid=3106507: Tue Nov 19 13:26:31 2024 00:34:29.102 read: IOPS=2651, BW=20.7MiB/s (21.7MB/s)(104MiB/5001msec) 00:34:29.102 slat (nsec): min=6124, max=42692, avg=9066.80, stdev=3274.86 00:34:29.102 clat (usec): min=849, max=5604, avg=2990.37, stdev=424.69 00:34:29.102 lat (usec): min=858, max=5616, avg=2999.44, stdev=424.69 00:34:29.103 clat percentiles (usec): 00:34:29.103 | 1.00th=[ 2024], 
5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2704], 00:34:29.103 | 30.00th=[ 2835], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3032], 00:34:29.103 | 70.00th=[ 3064], 80.00th=[ 3163], 90.00th=[ 3392], 95.00th=[ 3720], 00:34:29.103 | 99.00th=[ 4490], 99.50th=[ 4883], 99.90th=[ 5342], 99.95th=[ 5342], 00:34:29.103 | 99.99th=[ 5407] 00:34:29.103 bw ( KiB/s): min=20864, max=21808, per=25.46%, avg=21249.78, stdev=292.02, samples=9 00:34:29.103 iops : min= 2608, max= 2726, avg=2656.22, stdev=36.50, samples=9 00:34:29.103 lat (usec) : 1000=0.01% 00:34:29.103 lat (msec) : 2=0.79%, 4=96.35%, 10=2.85% 00:34:29.103 cpu : usr=95.68%, sys=4.00%, ctx=7, majf=0, minf=9 00:34:29.103 IO depths : 1=0.3%, 2=5.0%, 4=66.4%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.103 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.103 issued rwts: total=13259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.103 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:29.103 filename1: (groupid=0, jobs=1): err= 0: pid=3106508: Tue Nov 19 13:26:31 2024 00:34:29.103 read: IOPS=2485, BW=19.4MiB/s (20.4MB/s)(97.1MiB/5001msec) 00:34:29.103 slat (nsec): min=6105, max=44858, avg=8752.64, stdev=3167.98 00:34:29.103 clat (usec): min=672, max=7130, avg=3192.76, stdev=470.76 00:34:29.103 lat (usec): min=684, max=7137, avg=3201.52, stdev=470.54 00:34:29.103 clat percentiles (usec): 00:34:29.103 | 1.00th=[ 2245], 5.00th=[ 2671], 10.00th=[ 2835], 20.00th=[ 2966], 00:34:29.103 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3130], 00:34:29.103 | 70.00th=[ 3261], 80.00th=[ 3392], 90.00th=[ 3720], 95.00th=[ 4146], 00:34:29.103 | 99.00th=[ 5014], 99.50th=[ 5276], 99.90th=[ 5538], 99.95th=[ 5669], 00:34:29.103 | 99.99th=[ 7111] 00:34:29.103 bw ( KiB/s): min=18624, max=20352, per=23.74%, avg=19817.78, stdev=540.60, samples=9 00:34:29.103 iops : min= 2328, max= 2544, avg=2477.22, stdev=67.58, samples=9 00:34:29.103 lat (usec) : 750=0.01%, 1000=0.03% 00:34:29.103 lat (msec) : 2=0.42%, 4=93.41%, 10=6.13% 00:34:29.103 cpu : usr=95.60%, sys=4.10%, ctx=9, majf=0, minf=9 00:34:29.103 IO depths : 1=0.1%, 2=2.6%, 4=69.9%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.103 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.103 issued rwts: total=12430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.103 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:29.103 00:34:29.103 Run status group 0 (all jobs): 00:34:29.103 READ: bw=81.5MiB/s (85.5MB/s), 19.4MiB/s-21.5MiB/s (20.4MB/s-22.6MB/s), io=408MiB (428MB), run=5001-5003msec 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # 
set +x 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.103 00:34:29.103 real 0m24.894s 00:34:29.103 user 4m53.913s 00:34:29.103 sys 0m5.305s 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:29.103 13:26:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.103 ************************************ 00:34:29.103 END TEST fio_dif_rand_params 00:34:29.103 ************************************ 00:34:29.103 13:26:31 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:29.103 13:26:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:29.103 13:26:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:29.103 13:26:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:29.103 ************************************ 00:34:29.103 START TEST fio_dif_digest 00:34:29.103 ************************************ 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:29.103 
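The destroy_subsystems trace above tears each target down in dependency order: the NVMe-oF subsystem is deleted first, then the null bdev that backed its namespace. Roughly equivalent direct RPC calls, assuming the in-tree scripts/rpc.py and the default SPDK RPC socket:

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_null_delete bdev_null1

The fio_dif_digest test that starts next repeats the setup with a --dif-type 3 null bdev and both TCP digests enabled.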
13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:29.103 bdev_null0 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:29.103 [2024-11-19 13:26:31.569841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:29.103 { 00:34:29.103 "params": { 00:34:29.103 "name": "Nvme$subsystem", 00:34:29.103 "trtype": 
"$TEST_TRANSPORT", 00:34:29.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.103 "adrfam": "ipv4", 00:34:29.103 "trsvcid": "$NVMF_PORT", 00:34:29.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.103 "hdgst": ${hdgst:-false}, 00:34:29.103 "ddgst": ${ddgst:-false} 00:34:29.103 }, 00:34:29.103 "method": "bdev_nvme_attach_controller" 00:34:29.103 } 00:34:29.103 EOF 00:34:29.103 )") 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:29.103 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:29.104 "params": { 00:34:29.104 "name": "Nvme0", 00:34:29.104 "trtype": "tcp", 00:34:29.104 "traddr": "10.0.0.2", 00:34:29.104 "adrfam": "ipv4", 00:34:29.104 "trsvcid": "4420", 00:34:29.104 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.104 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.104 "hdgst": true, 00:34:29.104 "ddgst": true 00:34:29.104 }, 00:34:29.104 "method": "bdev_nvme_attach_controller" 00:34:29.104 }' 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:29.104 13:26:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.104 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:29.104 ... 
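Unlike the earlier runs, this controller is attached with "hdgst": true and "ddgst": true, enabling NVMe/TCP header and data digests (CRC32C protection on each PDU) for the whole 128 KiB random-read workload; the digests are switched on in the JSON config above, not in the job file. A hedged reconstruction of the job fio receives on /dev/fd/61, with parameters read off the dif.sh trace and the filename0 banner; the bdev name Nvme0n1 is an assumption, not taken from the log:

  cat <<'EOF' > digest.fio
  [global]
  # the SPDK fio plugin requires thread mode
  thread=1
  ioengine=spdk_bdev
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=10
  time_based=1
  [filename0]
  filename=Nvme0n1
  EOF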
00:34:29.104 fio-3.35 00:34:29.104 Starting 3 threads 00:34:41.316 00:34:41.316 filename0: (groupid=0, jobs=1): err= 0: pid=3107665: Tue Nov 19 13:26:42 2024 00:34:41.316 read: IOPS=284, BW=35.6MiB/s (37.4MB/s)(358MiB/10046msec) 00:34:41.316 slat (nsec): min=6429, max=33549, avg=11560.81, stdev=1621.24 00:34:41.316 clat (usec): min=8169, max=50302, avg=10497.51, stdev=1249.90 00:34:41.316 lat (usec): min=8181, max=50314, avg=10509.07, stdev=1249.88 00:34:41.316 clat percentiles (usec): 00:34:41.316 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9896], 00:34:41.316 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:34:41.316 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11600], 00:34:41.316 | 99.00th=[12387], 99.50th=[12518], 99.90th=[12780], 99.95th=[49021], 00:34:41.316 | 99.99th=[50070] 00:34:41.316 bw ( KiB/s): min=35328, max=37376, per=34.87%, avg=36620.80, stdev=547.64, samples=20 00:34:41.316 iops : min= 276, max= 292, avg=286.10, stdev= 4.28, samples=20 00:34:41.316 lat (msec) : 10=24.73%, 20=75.20%, 50=0.03%, 100=0.03% 00:34:41.316 cpu : usr=94.54%, sys=5.15%, ctx=14, majf=0, minf=40 00:34:41.316 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:41.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.316 issued rwts: total=2863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.316 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:41.316 filename0: (groupid=0, jobs=1): err= 0: pid=3107666: Tue Nov 19 13:26:42 2024 00:34:41.316 read: IOPS=273, BW=34.2MiB/s (35.9MB/s)(344MiB/10045msec) 00:34:41.316 slat (nsec): min=6451, max=52583, avg=11526.27, stdev=1817.13 00:34:41.316 clat (usec): min=8582, max=51794, avg=10933.20, stdev=1279.60 00:34:41.316 lat (usec): min=8594, max=51804, avg=10944.73, stdev=1279.59 00:34:41.316 clat percentiles (usec): 00:34:41.316 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[10290], 00:34:41.316 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:34:41.316 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:34:41.316 | 99.00th=[12780], 99.50th=[12911], 99.90th=[14353], 99.95th=[46924], 00:34:41.316 | 99.99th=[51643] 00:34:41.316 bw ( KiB/s): min=34816, max=36096, per=33.48%, avg=35161.60, stdev=409.22, samples=20 00:34:41.316 iops : min= 272, max= 282, avg=274.70, stdev= 3.20, samples=20 00:34:41.316 lat (msec) : 10=11.31%, 20=88.61%, 50=0.04%, 100=0.04% 00:34:41.316 cpu : usr=94.54%, sys=5.14%, ctx=17, majf=0, minf=90 00:34:41.316 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:41.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.317 issued rwts: total=2749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.317 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:41.317 filename0: (groupid=0, jobs=1): err= 0: pid=3107667: Tue Nov 19 13:26:42 2024 00:34:41.317 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(329MiB/10046msec) 00:34:41.317 slat (nsec): min=6432, max=28030, avg=11380.88, stdev=1573.36 00:34:41.317 clat (usec): min=8034, max=48305, avg=11429.90, stdev=1240.80 00:34:41.317 lat (usec): min=8045, max=48314, avg=11441.28, stdev=1240.78 00:34:41.317 clat percentiles (usec): 00:34:41.317 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[10421], 
20.00th=[10814], 00:34:41.317 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:34:41.317 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:34:41.317 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14484], 99.95th=[45351], 00:34:41.317 | 99.99th=[48497] 00:34:41.317 bw ( KiB/s): min=32768, max=34304, per=32.02%, avg=33625.60, stdev=464.49, samples=20 00:34:41.317 iops : min= 256, max= 268, avg=262.70, stdev= 3.63, samples=20 00:34:41.317 lat (msec) : 10=3.16%, 20=96.77%, 50=0.08% 00:34:41.317 cpu : usr=94.34%, sys=5.36%, ctx=15, majf=0, minf=45 00:34:41.317 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:41.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.317 issued rwts: total=2630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.317 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:41.317 00:34:41.317 Run status group 0 (all jobs): 00:34:41.317 READ: bw=103MiB/s (108MB/s), 32.7MiB/s-35.6MiB/s (34.3MB/s-37.4MB/s), io=1030MiB (1080MB), run=10045-10046msec 00:34:41.317 13:26:42 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:41.317 13:26:42 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:41.317 13:26:42 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:41.317 13:26:42 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:41.317 13:26:42 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:41.317 13:26:42 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:41.317 13:26:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.317 13:26:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:41.317 13:26:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.317 13:26:42 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:41.317 13:26:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.317 13:26:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:41.317 13:26:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.317 00:34:41.317 real 0m11.167s 00:34:41.317 user 0m35.201s 00:34:41.317 sys 0m1.928s 00:34:41.317 13:26:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:41.317 13:26:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:41.317 ************************************ 00:34:41.317 END TEST fio_dif_digest 00:34:41.317 ************************************ 00:34:41.317 13:26:42 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:41.317 13:26:42 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:41.317 13:26:42 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:41.317 13:26:42 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:41.317 13:26:42 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:41.317 13:26:42 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:41.317 13:26:42 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:41.317 13:26:42 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:41.317 rmmod nvme_tcp 00:34:41.317 rmmod nvme_fabrics 00:34:41.317 rmmod nvme_keyring 00:34:41.317 13:26:42 nvmf_dif 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:41.317 13:26:42 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:41.317 13:26:42 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:41.317 13:26:42 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3099067 ']' 00:34:41.317 13:26:42 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3099067 00:34:41.317 13:26:42 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3099067 ']' 00:34:41.317 13:26:42 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3099067 00:34:41.317 13:26:42 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:41.317 13:26:42 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:41.317 13:26:42 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3099067 00:34:41.317 13:26:42 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:41.317 13:26:42 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:41.317 13:26:42 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3099067' 00:34:41.317 killing process with pid 3099067 00:34:41.317 13:26:42 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3099067 00:34:41.317 13:26:42 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3099067 00:34:41.317 13:26:43 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:41.317 13:26:43 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:42.698 Waiting for block devices as requested 00:34:42.698 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:42.698 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:42.698 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:42.698 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:42.957 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:42.957 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:42.958 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:43.217 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:43.217 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:43.217 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:43.217 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:43.476 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:43.476 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:43.476 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:43.735 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:43.736 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:43.736 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:43.995 13:26:47 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:43.995 13:26:47 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:43.995 13:26:47 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:43.995 13:26:47 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:43.995 13:26:47 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:43.995 13:26:47 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:43.995 13:26:47 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:43.995 13:26:47 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:43.995 13:26:47 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.995 13:26:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:43.995 13:26:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.903 13:26:49 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:45.903 
00:34:45.903 real 1m14.674s 00:34:45.903 user 7m11.935s 00:34:45.903 sys 0m20.690s 00:34:45.903 13:26:49 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:45.903 13:26:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:45.903 ************************************ 00:34:45.903 END TEST nvmf_dif 00:34:45.903 ************************************ 00:34:45.903 13:26:49 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:45.903 13:26:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:45.903 13:26:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:45.903 13:26:49 -- common/autotest_common.sh@10 -- # set +x 00:34:45.903 ************************************ 00:34:45.903 START TEST nvmf_abort_qd_sizes 00:34:45.903 ************************************ 00:34:45.903 13:26:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:46.162 * Looking for test storage... 00:34:46.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:46.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.162 --rc genhtml_branch_coverage=1 00:34:46.162 --rc genhtml_function_coverage=1 00:34:46.162 --rc genhtml_legend=1 00:34:46.162 --rc geninfo_all_blocks=1 00:34:46.162 --rc geninfo_unexecuted_blocks=1 00:34:46.162 00:34:46.162 ' 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:46.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.162 --rc genhtml_branch_coverage=1 00:34:46.162 --rc genhtml_function_coverage=1 00:34:46.162 --rc genhtml_legend=1 00:34:46.162 --rc geninfo_all_blocks=1 00:34:46.162 --rc geninfo_unexecuted_blocks=1 00:34:46.162 00:34:46.162 ' 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:46.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.162 --rc genhtml_branch_coverage=1 00:34:46.162 --rc genhtml_function_coverage=1 00:34:46.162 --rc genhtml_legend=1 00:34:46.162 --rc geninfo_all_blocks=1 00:34:46.162 --rc geninfo_unexecuted_blocks=1 00:34:46.162 00:34:46.162 ' 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:46.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.162 --rc genhtml_branch_coverage=1 00:34:46.162 --rc genhtml_function_coverage=1 00:34:46.162 --rc genhtml_legend=1 00:34:46.162 --rc geninfo_all_blocks=1 00:34:46.162 --rc geninfo_unexecuted_blocks=1 00:34:46.162 00:34:46.162 ' 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:46.162 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:46.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:46.163 13:26:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:52.749 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:52.749 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:52.749 Found net devices under 0000:86:00.0: cvl_0_0 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:52.749 Found net devices under 0000:86:00.1: cvl_0_1 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:52.749 13:26:55 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:52.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:52.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:34:52.749 00:34:52.749 --- 10.0.0.2 ping statistics --- 00:34:52.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.749 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:52.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:52.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:34:52.749 00:34:52.749 --- 10.0.0.1 ping statistics --- 00:34:52.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.749 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:52.749 13:26:55 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:55.287 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:55.287 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:55.287 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:55.287 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:55.287 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:55.287 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:55.287 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:55.287 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:55.287 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:55.287 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:55.287 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:55.287 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:55.287 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:55.287 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:55.287 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:55.287 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:55.855 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3115472 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3115472 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3115472 ']' 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:55.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:55.855 13:26:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:56.113 [2024-11-19 13:26:59.234570] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:34:56.113 [2024-11-19 13:26:59.234618] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.113 [2024-11-19 13:26:59.315929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:56.113 [2024-11-19 13:26:59.359241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.113 [2024-11-19 13:26:59.359277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:56.113 [2024-11-19 13:26:59.359285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.113 [2024-11-19 13:26:59.359291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.113 [2024-11-19 13:26:59.359296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:56.113 [2024-11-19 13:26:59.360772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.113 [2024-11-19 13:26:59.360904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:56.113 [2024-11-19 13:26:59.360944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.113 [2024-11-19 13:26:59.360945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:56.113 13:26:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.113 13:26:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:56.113 13:26:59 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:56.113 13:26:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:56.113 13:26:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:56.372 
13:26:59 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:56.372 13:26:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:56.372 ************************************ 00:34:56.372 START TEST spdk_target_abort 00:34:56.372 ************************************ 00:34:56.372 13:26:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:56.372 13:26:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:56.372 13:26:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:56.372 13:26:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.372 13:26:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:59.662 spdk_targetn1 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:59.662 [2024-11-19 13:27:02.378141] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:59.662 [2024-11-19 13:27:02.423255] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:59.662 13:27:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:02.950 Initializing NVMe Controllers 00:35:02.950 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:02.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:02.950 Initialization complete. Launching workers. 00:35:02.950 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15493, failed: 0 00:35:02.950 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1243, failed to submit 14250 00:35:02.950 success 746, unsuccessful 497, failed 0 00:35:02.950 13:27:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:02.950 13:27:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:06.240 Initializing NVMe Controllers 00:35:06.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:06.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:06.240 Initialization complete. Launching workers. 00:35:06.240 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8614, failed: 0 00:35:06.240 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1271, failed to submit 7343 00:35:06.240 success 311, unsuccessful 960, failed 0 00:35:06.240 13:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:06.240 13:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:09.526 Initializing NVMe Controllers 00:35:09.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:09.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:09.526 Initialization complete. Launching workers. 
00:35:09.526 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37495, failed: 0 00:35:09.526 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2791, failed to submit 34704 00:35:09.526 success 609, unsuccessful 2182, failed 0 00:35:09.526 13:27:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:09.526 13:27:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.526 13:27:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:09.526 13:27:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.526 13:27:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:09.526 13:27:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.526 13:27:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:10.463 13:27:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.463 13:27:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3115472 00:35:10.463 13:27:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3115472 ']' 00:35:10.463 13:27:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3115472 00:35:10.463 13:27:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:10.463 13:27:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:10.463 13:27:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3115472 00:35:10.463 13:27:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:10.463 13:27:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:10.463 13:27:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3115472' 00:35:10.463 killing process with pid 3115472 00:35:10.463 13:27:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3115472 00:35:10.463 13:27:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3115472 00:35:10.463 00:35:10.463 real 0m14.232s 00:35:10.463 user 0m54.160s 00:35:10.463 sys 0m2.673s 00:35:10.463 13:27:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:10.463 13:27:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:10.463 ************************************ 00:35:10.463 END TEST spdk_target_abort 00:35:10.463 ************************************ 00:35:10.463 13:27:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:10.463 13:27:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:10.463 13:27:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:10.463 13:27:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:10.722 ************************************ 00:35:10.722 START TEST kernel_target_abort 00:35:10.722 
************************************ 00:35:10.722 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:10.722 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:10.722 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:10.722 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.722 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.722 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.722 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.722 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.723 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.723 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.723 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.723 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.723 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:10.723 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:10.723 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:10.723 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:10.723 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:10.723 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:10.723 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:35:10.723 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:10.723 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:10.723 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:10.723 13:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:13.259 Waiting for block devices as requested 00:35:13.259 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:13.518 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:13.518 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:13.519 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:13.778 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:13.778 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:13.778 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:14.037 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:14.037 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:14.037 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:14.037 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:14.296 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:14.296 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:14.296 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:14.555 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:14.555 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:14.555 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:14.815 13:27:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:14.815 13:27:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:14.815 13:27:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:14.815 13:27:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:14.815 13:27:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:14.815 13:27:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:14.815 13:27:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:14.815 13:27:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:14.815 13:27:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:14.815 No valid GPT data, bailing 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:14.815 13:27:18 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:14.815 00:35:14.815 Discovery Log Number of Records 2, Generation counter 2 00:35:14.815 =====Discovery Log Entry 0====== 00:35:14.815 trtype: tcp 00:35:14.815 adrfam: ipv4 00:35:14.815 subtype: current discovery subsystem 00:35:14.815 treq: not specified, sq flow control disable supported 00:35:14.815 portid: 1 00:35:14.815 trsvcid: 4420 00:35:14.815 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:14.815 traddr: 10.0.0.1 00:35:14.815 eflags: none 00:35:14.815 sectype: none 00:35:14.815 =====Discovery Log Entry 1====== 00:35:14.815 trtype: tcp 00:35:14.815 adrfam: ipv4 00:35:14.815 subtype: nvme subsystem 00:35:14.815 treq: not specified, sq flow control disable supported 00:35:14.815 portid: 1 00:35:14.815 trsvcid: 4420 00:35:14.815 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:14.815 traddr: 10.0.0.1 00:35:14.815 eflags: none 00:35:14.815 sectype: none 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:14.815 13:27:18 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:14.815 13:27:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:18.104 Initializing NVMe Controllers 00:35:18.104 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:18.104 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:18.104 Initialization complete. Launching workers. 00:35:18.104 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 91920, failed: 0 00:35:18.104 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 91920, failed to submit 0 00:35:18.104 success 0, unsuccessful 91920, failed 0 00:35:18.104 13:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:18.104 13:27:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:21.393 Initializing NVMe Controllers 00:35:21.393 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:21.393 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:21.393 Initialization complete. Launching workers. 
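The configfs sequence traced above is the complete kernel NVMe-oF target setup: load nvmet, create a subsystem and namespace under /sys/kernel/config/nvmet, back the namespace with the local NVMe block device, open a TCP port, and link the subsystem into the port so the discovery log gains an entry. The xtrace records only the echoed values, not the attribute files they land in, so the filenames below are reconstructed from the kernel nvmet configfs layout and should be read as assumptions; the queue-depth sweep at the end mirrors the traced abort runs (qds=(4 24 64)), with the SPDK checkout path parameterized:

#!/usr/bin/env bash
# Sketch: export a local NVMe namespace via the kernel nvmet/TCP target,
# then drive SPDK's abort example against it at several queue depths.
set -e
nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet
spdk=${SPDK_DIR:-/path/to/spdk}    # assumed checkout location

modprobe nvmet
modprobe nvmet-tcp
mkdir -p "$cfg/subsystems/$nqn/namespaces/1" "$cfg/ports/1"
echo "SPDK-$nqn"  > "$cfg/subsystems/$nqn/attr_model"    # target file assumed
echo 1            > "$cfg/subsystems/$nqn/attr_allow_any_host"
echo /dev/nvme0n1 > "$cfg/subsystems/$nqn/namespaces/1/device_path"
echo 1            > "$cfg/subsystems/$nqn/namespaces/1/enable"
echo 10.0.0.1     > "$cfg/ports/1/addr_traddr"
echo tcp          > "$cfg/ports/1/addr_trtype"
echo 4420         > "$cfg/ports/1/addr_trsvcid"
echo ipv4         > "$cfg/ports/1/addr_adrfam"
ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/"

trid="trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:$nqn"
for qd in 4 24 64; do
    # -q queue depth, -w rw -M 50: 50/50 read/write mix, -o 4096: 4 KiB I/O
    "$spdk/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$trid"
done

Teardown, traced further down as clean_kernel_target, is the mirror image: disable the namespace, remove the port/subsystem symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.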
00:35:21.393 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145888, failed: 0 00:35:21.393 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36550, failed to submit 109338 00:35:21.393 success 0, unsuccessful 36550, failed 0 00:35:21.393 13:27:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:21.393 13:27:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:24.685 Initializing NVMe Controllers 00:35:24.685 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:24.685 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:24.685 Initialization complete. Launching workers. 00:35:24.685 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 137905, failed: 0 00:35:24.685 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34550, failed to submit 103355 00:35:24.685 success 0, unsuccessful 34550, failed 0 00:35:24.685 13:27:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:24.685 13:27:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:24.685 13:27:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:24.685 13:27:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:24.685 13:27:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:24.685 13:27:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:24.685 13:27:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:24.685 13:27:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:24.685 13:27:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:24.685 13:27:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:27.226 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:27.226 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:27.226 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:27.226 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:27.226 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:27.226 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:27.226 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:27.226 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:27.226 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:27.226 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:27.226 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:27.226 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:27.226 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:27.226 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:27.226 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:35:27.226 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:28.167 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:28.167 00:35:28.167 real 0m17.462s 00:35:28.167 user 0m9.166s 00:35:28.167 sys 0m4.996s 00:35:28.167 13:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:28.167 13:27:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:28.167 ************************************ 00:35:28.167 END TEST kernel_target_abort 00:35:28.167 ************************************ 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:28.167 rmmod nvme_tcp 00:35:28.167 rmmod nvme_fabrics 00:35:28.167 rmmod nvme_keyring 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3115472 ']' 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3115472 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3115472 ']' 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3115472 00:35:28.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3115472) - No such process 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3115472 is not found' 00:35:28.167 Process with pid 3115472 is not found 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:28.167 13:27:31 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:30.709 Waiting for block devices as requested 00:35:30.969 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:30.969 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:31.229 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:31.229 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:31.229 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:31.229 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:31.489 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:31.489 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:31.489 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:31.749 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:31.749 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:31.749 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:31.749 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:32.009 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:32.009 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:32.009 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:32.269 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:32.269 13:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:32.269 13:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:32.269 13:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:32.269 13:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:32.269 13:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:32.269 13:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:32.269 13:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:32.269 13:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:32.269 13:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.269 13:27:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:32.269 13:27:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.812 13:27:37 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:34.812 00:35:34.812 real 0m48.322s 00:35:34.812 user 1m7.700s 00:35:34.812 sys 0m16.391s 00:35:34.812 13:27:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:34.812 13:27:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:34.812 ************************************ 00:35:34.812 END TEST nvmf_abort_qd_sizes 00:35:34.812 ************************************ 00:35:34.812 13:27:37 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:34.812 13:27:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:34.812 13:27:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:34.812 13:27:37 -- common/autotest_common.sh@10 -- # set +x 00:35:34.812 ************************************ 00:35:34.812 START TEST keyring_file 00:35:34.812 ************************************ 00:35:34.812 13:27:37 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:34.812 * Looking for test storage... 
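nvmftestfini's iptr step traced above clears the suite's firewall additions without tracking individual rules: it dumps the whole ruleset, filters out anything tagged SPDK_NVMF, and loads the result back. The three traced commands reassemble into a single pipeline:

# Rewrite the live ruleset minus every SPDK_NVMF-tagged rule.
iptables-save | grep -v SPDK_NVMF | iptables-restore

Anything the tests added under that tag disappears in one shot, while unrelated rules survive the round trip.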
00:35:34.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:34.812 13:27:37 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:34.812 13:27:37 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:35:34.812 13:27:37 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:34.812 13:27:37 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:34.812 13:27:37 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:34.812 13:27:37 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:34.812 13:27:37 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:34.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.812 --rc genhtml_branch_coverage=1 00:35:34.812 --rc genhtml_function_coverage=1 00:35:34.812 --rc genhtml_legend=1 00:35:34.812 --rc geninfo_all_blocks=1 00:35:34.812 --rc geninfo_unexecuted_blocks=1 00:35:34.812 00:35:34.812 ' 00:35:34.812 13:27:37 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:34.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.812 --rc genhtml_branch_coverage=1 00:35:34.812 --rc genhtml_function_coverage=1 00:35:34.812 --rc genhtml_legend=1 00:35:34.812 --rc geninfo_all_blocks=1 
00:35:34.812 --rc geninfo_unexecuted_blocks=1 00:35:34.812 00:35:34.812 ' 00:35:34.812 13:27:37 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:34.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.812 --rc genhtml_branch_coverage=1 00:35:34.812 --rc genhtml_function_coverage=1 00:35:34.812 --rc genhtml_legend=1 00:35:34.812 --rc geninfo_all_blocks=1 00:35:34.812 --rc geninfo_unexecuted_blocks=1 00:35:34.812 00:35:34.812 ' 00:35:34.813 13:27:37 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:34.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.813 --rc genhtml_branch_coverage=1 00:35:34.813 --rc genhtml_function_coverage=1 00:35:34.813 --rc genhtml_legend=1 00:35:34.813 --rc geninfo_all_blocks=1 00:35:34.813 --rc geninfo_unexecuted_blocks=1 00:35:34.813 00:35:34.813 ' 00:35:34.813 13:27:37 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:34.813 13:27:37 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:34.813 13:27:37 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:34.813 13:27:37 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:34.813 13:27:37 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:34.813 13:27:37 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.813 13:27:37 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.813 13:27:37 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.813 13:27:37 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:34.813 13:27:37 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:34.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:34.813 13:27:37 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:34.813 13:27:37 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:34.813 13:27:37 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:34.813 13:27:37 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:34.813 13:27:37 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:34.813 13:27:37 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
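prep_key, which begins here, turns a raw hex key into an NVMe TLS pre-shared key in the TP 8006 interchange format and parks it in a mode-0600 temp file so the keyring will accept it. The trace shows only that a short python snippet does the formatting; the sketch below is a rough reconstruction under the usual reading of the interchange format (base64 over the key bytes plus a CRC-32 trailer), and the CRC byte order in particular is an assumption, not lifted from the script:

# Build NVMeTLSkey-1:<digest>:base64(key || CRC32(key)): with digest 0
# ("no hash"), matching the key material used by the trace above.
key=00112233445566778899aabbccddeeff
path=$(mktemp)                      # e.g. /tmp/tmp.uMzA7ENKq6
python3 - "$key" <<'PY' > "$path"
import base64, sys, zlib
raw = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(raw).to_bytes(4, "little")   # byte order assumed
print(f"NVMeTLSkey-1:00:{base64.b64encode(raw + crc).decode()}:")
PY
chmod 0600 "$path"   # group/other access gets the key rejected later in the test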
00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uMzA7ENKq6 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uMzA7ENKq6 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uMzA7ENKq6 00:35:34.813 13:27:37 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.uMzA7ENKq6 00:35:34.813 13:27:37 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.c2FRLFNPkf 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:34.813 13:27:37 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.c2FRLFNPkf 00:35:34.813 13:27:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.c2FRLFNPkf 00:35:34.813 13:27:37 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.c2FRLFNPkf 00:35:34.813 13:27:37 keyring_file -- keyring/file.sh@30 -- # tgtpid=3124758 00:35:34.814 13:27:37 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:34.814 13:27:37 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3124758 00:35:34.814 13:27:37 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3124758 ']' 00:35:34.814 13:27:37 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.814 13:27:37 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:34.814 13:27:37 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.814 13:27:37 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:34.814 13:27:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:34.814 [2024-11-19 13:27:38.013500] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:35:34.814 [2024-11-19 13:27:38.013549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3124758 ] 00:35:34.814 [2024-11-19 13:27:38.089858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.814 [2024-11-19 13:27:38.132245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.074 13:27:38 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:35.074 13:27:38 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:35.074 13:27:38 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:35.074 13:27:38 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.074 13:27:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:35.074 [2024-11-19 13:27:38.356360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:35.074 null0 00:35:35.074 [2024-11-19 13:27:38.388413] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:35.074 [2024-11-19 13:27:38.388735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:35.074 13:27:38 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.074 13:27:38 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:35.074 13:27:38 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:35.074 13:27:38 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:35.074 13:27:38 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:35.074 13:27:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:35.074 13:27:38 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:35.074 13:27:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:35.074 13:27:38 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:35.074 13:27:38 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.074 13:27:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:35.074 [2024-11-19 13:27:38.416485] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:35.074 request: 00:35:35.074 { 00:35:35.074 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:35.074 "secure_channel": false, 00:35:35.075 "listen_address": { 00:35:35.075 "trtype": "tcp", 00:35:35.075 "traddr": "127.0.0.1", 00:35:35.075 "trsvcid": "4420" 00:35:35.075 }, 00:35:35.075 "method": "nvmf_subsystem_add_listener", 00:35:35.075 "req_id": 1 00:35:35.075 } 00:35:35.075 Got JSON-RPC error response 00:35:35.075 response: 00:35:35.075 { 00:35:35.075 
"code": -32602, 00:35:35.075 "message": "Invalid parameters" 00:35:35.075 } 00:35:35.075 13:27:38 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:35.075 13:27:38 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:35.075 13:27:38 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:35.075 13:27:38 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:35.075 13:27:38 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:35.075 13:27:38 keyring_file -- keyring/file.sh@47 -- # bperfpid=3124768 00:35:35.075 13:27:38 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:35.075 13:27:38 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3124768 /var/tmp/bperf.sock 00:35:35.075 13:27:38 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3124768 ']' 00:35:35.075 13:27:38 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:35.075 13:27:38 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:35.075 13:27:38 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:35.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:35.075 13:27:38 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:35.075 13:27:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:35.335 [2024-11-19 13:27:38.468419] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:35:35.335 [2024-11-19 13:27:38.468460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3124768 ] 00:35:35.335 [2024-11-19 13:27:38.542204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.335 [2024-11-19 13:27:38.585232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.335 13:27:38 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:35.335 13:27:38 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:35.335 13:27:38 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uMzA7ENKq6 00:35:35.335 13:27:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uMzA7ENKq6 00:35:35.594 13:27:38 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.c2FRLFNPkf 00:35:35.594 13:27:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.c2FRLFNPkf 00:35:35.854 13:27:39 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:35.854 13:27:39 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:35.854 13:27:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:35.854 13:27:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:35.854 13:27:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:36.114 13:27:39 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.uMzA7ENKq6 == \/\t\m\p\/\t\m\p\.\u\M\z\A\7\E\N\K\q\6 ]] 00:35:36.114 13:27:39 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:36.114 13:27:39 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:36.114 13:27:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.114 13:27:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.114 13:27:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:36.374 13:27:39 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.c2FRLFNPkf == \/\t\m\p\/\t\m\p\.\c\2\F\R\L\F\N\P\k\f ]] 00:35:36.374 13:27:39 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:36.374 13:27:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:36.374 13:27:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.374 13:27:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.374 13:27:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:36.374 13:27:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.374 13:27:39 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:36.374 13:27:39 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:36.374 13:27:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:36.374 13:27:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.374 13:27:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.374 13:27:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:36.374 13:27:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:36.634 13:27:39 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:36.634 13:27:39 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:36.634 13:27:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:36.893 [2024-11-19 13:27:40.108352] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:36.893 nvme0n1 00:35:36.893 13:27:40 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:36.893 13:27:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:36.893 13:27:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:36.893 13:27:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:36.893 13:27:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:36.893 13:27:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.153 13:27:40 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:37.153 13:27:40 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:37.153 13:27:40 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:35:37.153 13:27:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:37.153 13:27:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:37.153 13:27:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.153 13:27:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:37.413 13:27:40 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:37.413 13:27:40 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:37.413 Running I/O for 1 seconds... 00:35:38.354 18742.00 IOPS, 73.21 MiB/s 00:35:38.354 Latency(us) 00:35:38.354 [2024-11-19T12:27:41.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.354 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:38.354 nvme0n1 : 1.00 18787.56 73.39 0.00 0.00 6799.93 2820.90 15044.79 00:35:38.354 [2024-11-19T12:27:41.731Z] =================================================================================================================== 00:35:38.354 [2024-11-19T12:27:41.731Z] Total : 18787.56 73.39 0.00 0.00 6799.93 2820.90 15044.79 00:35:38.354 { 00:35:38.354 "results": [ 00:35:38.354 { 00:35:38.354 "job": "nvme0n1", 00:35:38.354 "core_mask": "0x2", 00:35:38.354 "workload": "randrw", 00:35:38.354 "percentage": 50, 00:35:38.354 "status": "finished", 00:35:38.354 "queue_depth": 128, 00:35:38.354 "io_size": 4096, 00:35:38.354 "runtime": 1.004441, 00:35:38.354 "iops": 18787.564426382436, 00:35:38.354 "mibps": 73.38892354055639, 00:35:38.354 "io_failed": 0, 00:35:38.354 "io_timeout": 0, 00:35:38.354 "avg_latency_us": 6799.92830038269, 00:35:38.354 "min_latency_us": 2820.897391304348, 00:35:38.354 "max_latency_us": 15044.786086956521 00:35:38.354 } 00:35:38.354 ], 00:35:38.354 "core_count": 1 00:35:38.354 } 00:35:38.614 13:27:41 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:38.614 13:27:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:38.614 13:27:41 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:38.614 13:27:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:38.614 13:27:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:38.614 13:27:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:38.614 13:27:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:38.614 13:27:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:38.873 13:27:42 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:38.873 13:27:42 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:38.873 13:27:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:38.873 13:27:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:38.873 13:27:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:38.873 13:27:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:38.874 13:27:42 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.133 13:27:42 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:39.133 13:27:42 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:39.133 13:27:42 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:39.133 13:27:42 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:39.133 13:27:42 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:39.133 13:27:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:39.133 13:27:42 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:39.133 13:27:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:39.133 13:27:42 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:39.133 13:27:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:39.393 [2024-11-19 13:27:42.521002] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:39.393 [2024-11-19 13:27:42.521319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162fd00 (107): Transport endpoint is not connected 00:35:39.393 [2024-11-19 13:27:42.522312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162fd00 (9): Bad file descriptor 00:35:39.393 [2024-11-19 13:27:42.523314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:39.393 [2024-11-19 13:27:42.523323] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:39.393 [2024-11-19 13:27:42.523331] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:39.393 [2024-11-19 13:27:42.523339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
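The attach with --psk key1 traced above fails by design: the target side was set up against key0's PSK earlier in the test, so the TLS handshake never completes, the socket reports "Transport endpoint is not connected", and the RPC returns the Input/output error dumped next. The suite wraps the call in its NOT helper to assert the failure; plain shell negation expresses the same expectation:

# Negative test: attaching with the mismatched PSK must fail.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
if ! "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
    echo "attach with wrong PSK failed as expected"
fi

The same pattern drives the later negative checks: keyring_file_add_key rejects a key file left at mode 0660, and an attach after the backing file has been removed fails with "No such device".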
00:35:39.393 request: 00:35:39.393 { 00:35:39.393 "name": "nvme0", 00:35:39.393 "trtype": "tcp", 00:35:39.393 "traddr": "127.0.0.1", 00:35:39.393 "adrfam": "ipv4", 00:35:39.393 "trsvcid": "4420", 00:35:39.393 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:39.393 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:39.393 "prchk_reftag": false, 00:35:39.393 "prchk_guard": false, 00:35:39.393 "hdgst": false, 00:35:39.393 "ddgst": false, 00:35:39.393 "psk": "key1", 00:35:39.393 "allow_unrecognized_csi": false, 00:35:39.393 "method": "bdev_nvme_attach_controller", 00:35:39.393 "req_id": 1 00:35:39.393 } 00:35:39.393 Got JSON-RPC error response 00:35:39.393 response: 00:35:39.393 { 00:35:39.393 "code": -5, 00:35:39.393 "message": "Input/output error" 00:35:39.393 } 00:35:39.393 13:27:42 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:39.393 13:27:42 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:39.393 13:27:42 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:39.393 13:27:42 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:39.393 13:27:42 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:39.393 13:27:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:39.393 13:27:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:39.393 13:27:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:39.393 13:27:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:39.393 13:27:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.393 13:27:42 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:39.393 13:27:42 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:39.393 13:27:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:39.393 13:27:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:39.393 13:27:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:39.393 13:27:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:39.393 13:27:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.653 13:27:42 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:39.653 13:27:42 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:39.654 13:27:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:40.008 13:27:43 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:40.008 13:27:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:40.008 13:27:43 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:40.008 13:27:43 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:40.008 13:27:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:40.297 13:27:43 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:35:40.297 13:27:43 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.uMzA7ENKq6 00:35:40.297 13:27:43 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.uMzA7ENKq6 00:35:40.297 13:27:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:40.297 13:27:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.uMzA7ENKq6 00:35:40.297 13:27:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:40.297 13:27:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:40.297 13:27:43 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:40.297 13:27:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:40.297 13:27:43 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uMzA7ENKq6 00:35:40.297 13:27:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uMzA7ENKq6 00:35:40.608 [2024-11-19 13:27:43.706448] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.uMzA7ENKq6': 0100660 00:35:40.608 [2024-11-19 13:27:43.706476] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:40.608 request: 00:35:40.608 { 00:35:40.608 "name": "key0", 00:35:40.608 "path": "/tmp/tmp.uMzA7ENKq6", 00:35:40.608 "method": "keyring_file_add_key", 00:35:40.608 "req_id": 1 00:35:40.608 } 00:35:40.608 Got JSON-RPC error response 00:35:40.608 response: 00:35:40.608 { 00:35:40.608 "code": -1, 00:35:40.608 "message": "Operation not permitted" 00:35:40.608 } 00:35:40.609 13:27:43 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:40.609 13:27:43 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:40.609 13:27:43 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:40.609 13:27:43 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:40.609 13:27:43 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.uMzA7ENKq6 00:35:40.609 13:27:43 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uMzA7ENKq6 00:35:40.609 13:27:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uMzA7ENKq6 00:35:40.609 13:27:43 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.uMzA7ENKq6 00:35:40.609 13:27:43 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:40.609 13:27:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:40.609 13:27:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:40.609 13:27:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:40.609 13:27:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:40.609 13:27:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:40.886 13:27:44 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:40.886 13:27:44 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.886 13:27:44 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:40.886 13:27:44 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.886 13:27:44 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:40.886 13:27:44 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:40.886 13:27:44 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:40.886 13:27:44 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:40.886 13:27:44 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.886 13:27:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:41.145 [2024-11-19 13:27:44.304028] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.uMzA7ENKq6': No such file or directory 00:35:41.145 [2024-11-19 13:27:44.304049] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:41.145 [2024-11-19 13:27:44.304064] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:41.145 [2024-11-19 13:27:44.304071] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:41.145 [2024-11-19 13:27:44.304078] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:41.145 [2024-11-19 13:27:44.304084] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:41.145 request: 00:35:41.145 { 00:35:41.145 "name": "nvme0", 00:35:41.145 "trtype": "tcp", 00:35:41.145 "traddr": "127.0.0.1", 00:35:41.145 "adrfam": "ipv4", 00:35:41.145 "trsvcid": "4420", 00:35:41.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:41.145 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:41.145 "prchk_reftag": false, 00:35:41.145 "prchk_guard": false, 00:35:41.145 "hdgst": false, 00:35:41.145 "ddgst": false, 00:35:41.145 "psk": "key0", 00:35:41.145 "allow_unrecognized_csi": false, 00:35:41.145 "method": "bdev_nvme_attach_controller", 00:35:41.145 "req_id": 1 00:35:41.145 } 00:35:41.145 Got JSON-RPC error response 00:35:41.145 response: 00:35:41.145 { 00:35:41.145 "code": -19, 00:35:41.145 "message": "No such device" 00:35:41.145 } 00:35:41.145 13:27:44 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:41.145 13:27:44 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:41.145 13:27:44 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:41.145 13:27:44 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:41.145 13:27:44 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:41.145 13:27:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:41.404 13:27:44 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:41.404 13:27:44 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:35:41.404 13:27:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:41.405 13:27:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:41.405 13:27:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:41.405 13:27:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:41.405 13:27:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1pkYHt0yOl 00:35:41.405 13:27:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:41.405 13:27:44 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:41.405 13:27:44 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:41.405 13:27:44 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:41.405 13:27:44 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:41.405 13:27:44 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:41.405 13:27:44 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:41.405 13:27:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1pkYHt0yOl 00:35:41.405 13:27:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1pkYHt0yOl 00:35:41.405 13:27:44 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.1pkYHt0yOl 00:35:41.405 13:27:44 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1pkYHt0yOl 00:35:41.405 13:27:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1pkYHt0yOl 00:35:41.664 13:27:44 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:41.664 13:27:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:41.664 nvme0n1 00:35:41.923 13:27:45 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:41.923 13:27:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:41.923 13:27:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:41.923 13:27:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.923 13:27:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:41.923 13:27:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.923 13:27:45 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:41.923 13:27:45 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:41.923 13:27:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:42.182 13:27:45 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:42.182 13:27:45 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:42.182 13:27:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:42.182 13:27:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:42.182 13:27:45 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.439 13:27:45 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:42.439 13:27:45 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:42.439 13:27:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:42.439 13:27:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:42.439 13:27:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:42.439 13:27:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.439 13:27:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:42.698 13:27:45 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:42.698 13:27:45 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:42.698 13:27:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:42.957 13:27:46 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:42.957 13:27:46 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:42.957 13:27:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.957 13:27:46 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:42.957 13:27:46 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1pkYHt0yOl 00:35:42.957 13:27:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1pkYHt0yOl 00:35:43.216 13:27:46 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.c2FRLFNPkf 00:35:43.216 13:27:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.c2FRLFNPkf 00:35:43.476 13:27:46 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:43.476 13:27:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:43.735 nvme0n1 00:35:43.735 13:27:46 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:43.735 13:27:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:43.994 13:27:47 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:43.994 "subsystems": [ 00:35:43.994 { 00:35:43.994 "subsystem": "keyring", 00:35:43.994 "config": [ 00:35:43.994 { 00:35:43.994 "method": "keyring_file_add_key", 00:35:43.994 "params": { 00:35:43.994 "name": "key0", 00:35:43.994 "path": "/tmp/tmp.1pkYHt0yOl" 00:35:43.994 } 00:35:43.994 }, 00:35:43.994 { 00:35:43.994 "method": "keyring_file_add_key", 00:35:43.994 "params": { 00:35:43.994 "name": "key1", 00:35:43.994 "path": "/tmp/tmp.c2FRLFNPkf" 00:35:43.994 } 00:35:43.994 } 00:35:43.994 ] 00:35:43.994 
}, 00:35:43.994 { 00:35:43.994 "subsystem": "iobuf", 00:35:43.994 "config": [ 00:35:43.994 { 00:35:43.994 "method": "iobuf_set_options", 00:35:43.994 "params": { 00:35:43.994 "small_pool_count": 8192, 00:35:43.994 "large_pool_count": 1024, 00:35:43.994 "small_bufsize": 8192, 00:35:43.994 "large_bufsize": 135168, 00:35:43.994 "enable_numa": false 00:35:43.994 } 00:35:43.994 } 00:35:43.994 ] 00:35:43.994 }, 00:35:43.994 { 00:35:43.994 "subsystem": "sock", 00:35:43.994 "config": [ 00:35:43.994 { 00:35:43.994 "method": "sock_set_default_impl", 00:35:43.994 "params": { 00:35:43.994 "impl_name": "posix" 00:35:43.994 } 00:35:43.994 }, 00:35:43.994 { 00:35:43.994 "method": "sock_impl_set_options", 00:35:43.994 "params": { 00:35:43.994 "impl_name": "ssl", 00:35:43.994 "recv_buf_size": 4096, 00:35:43.994 "send_buf_size": 4096, 00:35:43.994 "enable_recv_pipe": true, 00:35:43.994 "enable_quickack": false, 00:35:43.994 "enable_placement_id": 0, 00:35:43.994 "enable_zerocopy_send_server": true, 00:35:43.994 "enable_zerocopy_send_client": false, 00:35:43.994 "zerocopy_threshold": 0, 00:35:43.995 "tls_version": 0, 00:35:43.995 "enable_ktls": false 00:35:43.995 } 00:35:43.995 }, 00:35:43.995 { 00:35:43.995 "method": "sock_impl_set_options", 00:35:43.995 "params": { 00:35:43.995 "impl_name": "posix", 00:35:43.995 "recv_buf_size": 2097152, 00:35:43.995 "send_buf_size": 2097152, 00:35:43.995 "enable_recv_pipe": true, 00:35:43.995 "enable_quickack": false, 00:35:43.995 "enable_placement_id": 0, 00:35:43.995 "enable_zerocopy_send_server": true, 00:35:43.995 "enable_zerocopy_send_client": false, 00:35:43.995 "zerocopy_threshold": 0, 00:35:43.995 "tls_version": 0, 00:35:43.995 "enable_ktls": false 00:35:43.995 } 00:35:43.995 } 00:35:43.995 ] 00:35:43.995 }, 00:35:43.995 { 00:35:43.995 "subsystem": "vmd", 00:35:43.995 "config": [] 00:35:43.995 }, 00:35:43.995 { 00:35:43.995 "subsystem": "accel", 00:35:43.995 "config": [ 00:35:43.995 { 00:35:43.995 "method": "accel_set_options", 00:35:43.995 "params": { 00:35:43.995 "small_cache_size": 128, 00:35:43.995 "large_cache_size": 16, 00:35:43.995 "task_count": 2048, 00:35:43.995 "sequence_count": 2048, 00:35:43.995 "buf_count": 2048 00:35:43.995 } 00:35:43.995 } 00:35:43.995 ] 00:35:43.995 }, 00:35:43.995 { 00:35:43.995 "subsystem": "bdev", 00:35:43.995 "config": [ 00:35:43.995 { 00:35:43.995 "method": "bdev_set_options", 00:35:43.995 "params": { 00:35:43.995 "bdev_io_pool_size": 65535, 00:35:43.995 "bdev_io_cache_size": 256, 00:35:43.995 "bdev_auto_examine": true, 00:35:43.995 "iobuf_small_cache_size": 128, 00:35:43.995 "iobuf_large_cache_size": 16 00:35:43.995 } 00:35:43.995 }, 00:35:43.995 { 00:35:43.995 "method": "bdev_raid_set_options", 00:35:43.995 "params": { 00:35:43.995 "process_window_size_kb": 1024, 00:35:43.995 "process_max_bandwidth_mb_sec": 0 00:35:43.995 } 00:35:43.995 }, 00:35:43.995 { 00:35:43.995 "method": "bdev_iscsi_set_options", 00:35:43.995 "params": { 00:35:43.995 "timeout_sec": 30 00:35:43.995 } 00:35:43.995 }, 00:35:43.995 { 00:35:43.995 "method": "bdev_nvme_set_options", 00:35:43.995 "params": { 00:35:43.995 "action_on_timeout": "none", 00:35:43.995 "timeout_us": 0, 00:35:43.995 "timeout_admin_us": 0, 00:35:43.995 "keep_alive_timeout_ms": 10000, 00:35:43.995 "arbitration_burst": 0, 00:35:43.995 "low_priority_weight": 0, 00:35:43.995 "medium_priority_weight": 0, 00:35:43.995 "high_priority_weight": 0, 00:35:43.995 "nvme_adminq_poll_period_us": 10000, 00:35:43.995 "nvme_ioq_poll_period_us": 0, 00:35:43.995 "io_queue_requests": 512, 00:35:43.995 
"delay_cmd_submit": true, 00:35:43.995 "transport_retry_count": 4, 00:35:43.995 "bdev_retry_count": 3, 00:35:43.995 "transport_ack_timeout": 0, 00:35:43.995 "ctrlr_loss_timeout_sec": 0, 00:35:43.995 "reconnect_delay_sec": 0, 00:35:43.995 "fast_io_fail_timeout_sec": 0, 00:35:43.995 "disable_auto_failback": false, 00:35:43.995 "generate_uuids": false, 00:35:43.995 "transport_tos": 0, 00:35:43.995 "nvme_error_stat": false, 00:35:43.995 "rdma_srq_size": 0, 00:35:43.995 "io_path_stat": false, 00:35:43.995 "allow_accel_sequence": false, 00:35:43.995 "rdma_max_cq_size": 0, 00:35:43.995 "rdma_cm_event_timeout_ms": 0, 00:35:43.995 "dhchap_digests": [ 00:35:43.995 "sha256", 00:35:43.995 "sha384", 00:35:43.995 "sha512" 00:35:43.995 ], 00:35:43.995 "dhchap_dhgroups": [ 00:35:43.995 "null", 00:35:43.995 "ffdhe2048", 00:35:43.995 "ffdhe3072", 00:35:43.995 "ffdhe4096", 00:35:43.995 "ffdhe6144", 00:35:43.995 "ffdhe8192" 00:35:43.995 ] 00:35:43.995 } 00:35:43.995 }, 00:35:43.995 { 00:35:43.995 "method": "bdev_nvme_attach_controller", 00:35:43.995 "params": { 00:35:43.995 "name": "nvme0", 00:35:43.995 "trtype": "TCP", 00:35:43.995 "adrfam": "IPv4", 00:35:43.995 "traddr": "127.0.0.1", 00:35:43.995 "trsvcid": "4420", 00:35:43.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:43.995 "prchk_reftag": false, 00:35:43.995 "prchk_guard": false, 00:35:43.995 "ctrlr_loss_timeout_sec": 0, 00:35:43.995 "reconnect_delay_sec": 0, 00:35:43.995 "fast_io_fail_timeout_sec": 0, 00:35:43.995 "psk": "key0", 00:35:43.995 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:43.995 "hdgst": false, 00:35:43.995 "ddgst": false, 00:35:43.995 "multipath": "multipath" 00:35:43.995 } 00:35:43.995 }, 00:35:43.995 { 00:35:43.995 "method": "bdev_nvme_set_hotplug", 00:35:43.995 "params": { 00:35:43.995 "period_us": 100000, 00:35:43.995 "enable": false 00:35:43.995 } 00:35:43.995 }, 00:35:43.995 { 00:35:43.995 "method": "bdev_wait_for_examine" 00:35:43.995 } 00:35:43.995 ] 00:35:43.995 }, 00:35:43.995 { 00:35:43.995 "subsystem": "nbd", 00:35:43.995 "config": [] 00:35:43.995 } 00:35:43.995 ] 00:35:43.995 }' 00:35:43.995 13:27:47 keyring_file -- keyring/file.sh@115 -- # killprocess 3124768 00:35:43.995 13:27:47 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3124768 ']' 00:35:43.995 13:27:47 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3124768 00:35:43.995 13:27:47 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:43.995 13:27:47 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:43.995 13:27:47 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3124768 00:35:43.995 13:27:47 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:43.995 13:27:47 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:43.995 13:27:47 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3124768' 00:35:43.995 killing process with pid 3124768 00:35:43.995 13:27:47 keyring_file -- common/autotest_common.sh@973 -- # kill 3124768 00:35:43.995 Received shutdown signal, test time was about 1.000000 seconds 00:35:43.995 00:35:43.995 Latency(us) 00:35:43.995 [2024-11-19T12:27:47.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:43.995 [2024-11-19T12:27:47.372Z] =================================================================================================================== 00:35:43.995 [2024-11-19T12:27:47.372Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:43.995 13:27:47 
keyring_file -- common/autotest_common.sh@978 -- # wait 3124768 00:35:44.255 13:27:47 keyring_file -- keyring/file.sh@118 -- # bperfpid=3126300 00:35:44.255 13:27:47 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3126300 /var/tmp/bperf.sock 00:35:44.255 13:27:47 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3126300 ']' 00:35:44.255 13:27:47 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:44.255 13:27:47 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:44.255 13:27:47 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:44.255 13:27:47 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:44.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:44.255 13:27:47 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:44.255 "subsystems": [ 00:35:44.255 { 00:35:44.255 "subsystem": "keyring", 00:35:44.255 "config": [ 00:35:44.255 { 00:35:44.255 "method": "keyring_file_add_key", 00:35:44.255 "params": { 00:35:44.255 "name": "key0", 00:35:44.255 "path": "/tmp/tmp.1pkYHt0yOl" 00:35:44.255 } 00:35:44.255 }, 00:35:44.255 { 00:35:44.255 "method": "keyring_file_add_key", 00:35:44.255 "params": { 00:35:44.255 "name": "key1", 00:35:44.255 "path": "/tmp/tmp.c2FRLFNPkf" 00:35:44.255 } 00:35:44.255 } 00:35:44.255 ] 00:35:44.255 }, 00:35:44.255 { 00:35:44.255 "subsystem": "iobuf", 00:35:44.255 "config": [ 00:35:44.255 { 00:35:44.255 "method": "iobuf_set_options", 00:35:44.255 "params": { 00:35:44.255 "small_pool_count": 8192, 00:35:44.255 "large_pool_count": 1024, 00:35:44.255 "small_bufsize": 8192, 00:35:44.255 "large_bufsize": 135168, 00:35:44.255 "enable_numa": false 00:35:44.255 } 00:35:44.255 } 00:35:44.255 ] 00:35:44.255 }, 00:35:44.255 { 00:35:44.255 "subsystem": "sock", 00:35:44.255 "config": [ 00:35:44.255 { 00:35:44.255 "method": "sock_set_default_impl", 00:35:44.255 "params": { 00:35:44.255 "impl_name": "posix" 00:35:44.255 } 00:35:44.255 }, 00:35:44.255 { 00:35:44.255 "method": "sock_impl_set_options", 00:35:44.255 "params": { 00:35:44.255 "impl_name": "ssl", 00:35:44.255 "recv_buf_size": 4096, 00:35:44.255 "send_buf_size": 4096, 00:35:44.255 "enable_recv_pipe": true, 00:35:44.255 "enable_quickack": false, 00:35:44.255 "enable_placement_id": 0, 00:35:44.255 "enable_zerocopy_send_server": true, 00:35:44.255 "enable_zerocopy_send_client": false, 00:35:44.255 "zerocopy_threshold": 0, 00:35:44.255 "tls_version": 0, 00:35:44.255 "enable_ktls": false 00:35:44.255 } 00:35:44.255 }, 00:35:44.255 { 00:35:44.255 "method": "sock_impl_set_options", 00:35:44.255 "params": { 00:35:44.255 "impl_name": "posix", 00:35:44.255 "recv_buf_size": 2097152, 00:35:44.255 "send_buf_size": 2097152, 00:35:44.255 "enable_recv_pipe": true, 00:35:44.255 "enable_quickack": false, 00:35:44.255 "enable_placement_id": 0, 00:35:44.255 "enable_zerocopy_send_server": true, 00:35:44.255 "enable_zerocopy_send_client": false, 00:35:44.255 "zerocopy_threshold": 0, 00:35:44.255 "tls_version": 0, 00:35:44.255 "enable_ktls": false 00:35:44.255 } 00:35:44.255 } 00:35:44.255 ] 00:35:44.255 }, 00:35:44.255 { 00:35:44.255 "subsystem": "vmd", 00:35:44.255 "config": [] 00:35:44.255 }, 00:35:44.255 { 00:35:44.255 "subsystem": "accel", 00:35:44.255 "config": [ 00:35:44.255 
{ 00:35:44.255 "method": "accel_set_options", 00:35:44.255 "params": { 00:35:44.255 "small_cache_size": 128, 00:35:44.255 "large_cache_size": 16, 00:35:44.255 "task_count": 2048, 00:35:44.255 "sequence_count": 2048, 00:35:44.255 "buf_count": 2048 00:35:44.255 } 00:35:44.255 } 00:35:44.255 ] 00:35:44.255 }, 00:35:44.255 { 00:35:44.255 "subsystem": "bdev", 00:35:44.255 "config": [ 00:35:44.255 { 00:35:44.255 "method": "bdev_set_options", 00:35:44.255 "params": { 00:35:44.255 "bdev_io_pool_size": 65535, 00:35:44.255 "bdev_io_cache_size": 256, 00:35:44.255 "bdev_auto_examine": true, 00:35:44.255 "iobuf_small_cache_size": 128, 00:35:44.255 "iobuf_large_cache_size": 16 00:35:44.255 } 00:35:44.255 }, 00:35:44.255 { 00:35:44.255 "method": "bdev_raid_set_options", 00:35:44.255 "params": { 00:35:44.255 "process_window_size_kb": 1024, 00:35:44.255 "process_max_bandwidth_mb_sec": 0 00:35:44.255 } 00:35:44.255 }, 00:35:44.255 { 00:35:44.255 "method": "bdev_iscsi_set_options", 00:35:44.255 "params": { 00:35:44.255 "timeout_sec": 30 00:35:44.255 } 00:35:44.255 }, 00:35:44.255 { 00:35:44.255 "method": "bdev_nvme_set_options", 00:35:44.255 "params": { 00:35:44.255 "action_on_timeout": "none", 00:35:44.255 "timeout_us": 0, 00:35:44.255 "timeout_admin_us": 0, 00:35:44.255 "keep_alive_timeout_ms": 10000, 00:35:44.255 "arbitration_burst": 0, 00:35:44.255 "low_priority_weight": 0, 00:35:44.255 "medium_priority_weight": 0, 00:35:44.255 "high_priority_weight": 0, 00:35:44.255 "nvme_adminq_poll_period_us": 10000, 00:35:44.255 "nvme_ioq_poll_period_us": 0, 00:35:44.255 "io_queue_requests": 512, 00:35:44.255 "delay_cmd_submit": true, 00:35:44.255 "transport_retry_count": 4, 00:35:44.255 "bdev_retry_count": 3, 00:35:44.255 "transport_ack_timeout": 0, 00:35:44.255 "ctrlr_loss_timeout_sec": 0, 00:35:44.255 "reconnect_delay_sec": 0, 00:35:44.255 "fast_io_fail_timeout_sec": 0, 00:35:44.255 "disable_auto_failback": false, 00:35:44.255 "generate_uuids": false, 00:35:44.255 "transport_tos": 0, 00:35:44.255 "nvme_error_stat": false, 00:35:44.255 "rdma_srq_size": 0, 00:35:44.255 "io_path_stat": false, 00:35:44.255 "allow_accel_sequence": false, 00:35:44.255 "rdma_max_cq_size": 0, 00:35:44.256 "rdma_cm_event_timeout_ms": 0, 00:35:44.256 "dhchap_digests": [ 00:35:44.256 "sha256", 00:35:44.256 "sha384", 00:35:44.256 "sha512" 00:35:44.256 ], 00:35:44.256 "dhchap_dhgroups": [ 00:35:44.256 "null", 00:35:44.256 "ffdhe2048", 00:35:44.256 "ffdhe3072", 00:35:44.256 "ffdhe4096", 00:35:44.256 "ffdhe6144", 00:35:44.256 "ffdhe8192" 00:35:44.256 ] 00:35:44.256 } 00:35:44.256 }, 00:35:44.256 { 00:35:44.256 "method": "bdev_nvme_attach_controller", 00:35:44.256 "params": { 00:35:44.256 "name": "nvme0", 00:35:44.256 "trtype": "TCP", 00:35:44.256 "adrfam": "IPv4", 00:35:44.256 "traddr": "127.0.0.1", 00:35:44.256 "trsvcid": "4420", 00:35:44.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:44.256 "prchk_reftag": false, 00:35:44.256 "prchk_guard": false, 00:35:44.256 "ctrlr_loss_timeout_sec": 0, 00:35:44.256 "reconnect_delay_sec": 0, 00:35:44.256 "fast_io_fail_timeout_sec": 0, 00:35:44.256 "psk": "key0", 00:35:44.256 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:44.256 "hdgst": false, 00:35:44.256 "ddgst": false, 00:35:44.256 "multipath": "multipath" 00:35:44.256 } 00:35:44.256 }, 00:35:44.256 { 00:35:44.256 "method": "bdev_nvme_set_hotplug", 00:35:44.256 "params": { 00:35:44.256 "period_us": 100000, 00:35:44.256 "enable": false 00:35:44.256 } 00:35:44.256 }, 00:35:44.256 { 00:35:44.256 "method": "bdev_wait_for_examine" 00:35:44.256 } 00:35:44.256 
] 00:35:44.256 }, 00:35:44.256 { 00:35:44.256 "subsystem": "nbd", 00:35:44.256 "config": [] 00:35:44.256 } 00:35:44.256 ] 00:35:44.256 }' 00:35:44.256 13:27:47 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:44.256 13:27:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:44.256 [2024-11-19 13:27:47.425036] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:35:44.256 [2024-11-19 13:27:47.425084] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3126300 ] 00:35:44.256 [2024-11-19 13:27:47.498952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.256 [2024-11-19 13:27:47.541569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.515 [2024-11-19 13:27:47.701935] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:45.084 13:27:48 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:45.084 13:27:48 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:45.084 13:27:48 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:45.084 13:27:48 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:45.084 13:27:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.343 13:27:48 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:45.343 13:27:48 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:45.343 13:27:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:45.343 13:27:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:45.343 13:27:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:45.343 13:27:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:45.343 13:27:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.343 13:27:48 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:45.343 13:27:48 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:45.343 13:27:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:45.343 13:27:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:45.343 13:27:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:45.343 13:27:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.343 13:27:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:45.602 13:27:48 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:45.602 13:27:48 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:45.602 13:27:48 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:45.602 13:27:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:45.861 13:27:49 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:45.861 13:27:49 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:45.861 13:27:49 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.1pkYHt0yOl /tmp/tmp.c2FRLFNPkf 00:35:45.861 13:27:49 keyring_file -- keyring/file.sh@20 -- # killprocess 3126300 00:35:45.861 13:27:49 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3126300 ']' 00:35:45.861 13:27:49 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3126300 00:35:45.861 13:27:49 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:45.861 13:27:49 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:45.861 13:27:49 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3126300 00:35:45.861 13:27:49 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:45.861 13:27:49 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:45.861 13:27:49 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3126300' 00:35:45.861 killing process with pid 3126300 00:35:45.861 13:27:49 keyring_file -- common/autotest_common.sh@973 -- # kill 3126300 00:35:45.861 Received shutdown signal, test time was about 1.000000 seconds 00:35:45.861 00:35:45.861 Latency(us) 00:35:45.861 [2024-11-19T12:27:49.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:45.861 [2024-11-19T12:27:49.238Z] =================================================================================================================== 00:35:45.861 [2024-11-19T12:27:49.238Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:45.861 13:27:49 keyring_file -- common/autotest_common.sh@978 -- # wait 3126300 00:35:46.119 13:27:49 keyring_file -- keyring/file.sh@21 -- # killprocess 3124758 00:35:46.119 13:27:49 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3124758 ']' 00:35:46.119 13:27:49 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3124758 00:35:46.119 13:27:49 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:46.119 13:27:49 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:46.119 13:27:49 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3124758 00:35:46.119 13:27:49 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:46.119 13:27:49 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:46.119 13:27:49 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3124758' 00:35:46.119 killing process with pid 3124758 00:35:46.119 13:27:49 keyring_file -- common/autotest_common.sh@973 -- # kill 3124758 00:35:46.119 13:27:49 keyring_file -- common/autotest_common.sh@978 -- # wait 3124758 00:35:46.378 00:35:46.378 real 0m11.989s 00:35:46.378 user 0m29.851s 00:35:46.378 sys 0m2.735s 00:35:46.378 13:27:49 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:46.378 13:27:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:46.378 ************************************ 00:35:46.378 END TEST keyring_file 00:35:46.378 ************************************ 00:35:46.378 13:27:49 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:46.378 13:27:49 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:46.378 13:27:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:46.378 13:27:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:46.378 13:27:49 -- 
common/autotest_common.sh@10 -- # set +x 00:35:46.378 ************************************ 00:35:46.378 START TEST keyring_linux 00:35:46.378 ************************************ 00:35:46.378 13:27:49 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:46.378 Joined session keyring: 341864358 00:35:46.639 * Looking for test storage... 00:35:46.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:46.639 13:27:49 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:46.639 13:27:49 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:46.639 13:27:49 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:46.639 13:27:49 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:46.639 13:27:49 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:46.639 13:27:49 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:46.639 13:27:49 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:46.639 13:27:49 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:46.639 13:27:49 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:46.639 13:27:49 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:46.639 13:27:49 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:46.639 13:27:49 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:46.639 13:27:49 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:46.639 13:27:49 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:46.640 13:27:49 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:46.640 13:27:49 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:46.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.640 --rc genhtml_branch_coverage=1 00:35:46.640 --rc genhtml_function_coverage=1 00:35:46.640 --rc genhtml_legend=1 00:35:46.640 --rc geninfo_all_blocks=1 00:35:46.640 --rc geninfo_unexecuted_blocks=1 00:35:46.640 00:35:46.640 ' 00:35:46.640 13:27:49 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:46.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.640 --rc genhtml_branch_coverage=1 00:35:46.640 --rc genhtml_function_coverage=1 00:35:46.640 --rc genhtml_legend=1 00:35:46.640 --rc geninfo_all_blocks=1 00:35:46.640 --rc geninfo_unexecuted_blocks=1 00:35:46.640 00:35:46.640 ' 00:35:46.640 13:27:49 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:46.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.640 --rc genhtml_branch_coverage=1 00:35:46.640 --rc genhtml_function_coverage=1 00:35:46.640 --rc genhtml_legend=1 00:35:46.640 --rc geninfo_all_blocks=1 00:35:46.640 --rc geninfo_unexecuted_blocks=1 00:35:46.640 00:35:46.640 ' 00:35:46.640 13:27:49 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:46.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.640 --rc genhtml_branch_coverage=1 00:35:46.640 --rc genhtml_function_coverage=1 00:35:46.640 --rc genhtml_legend=1 00:35:46.640 --rc geninfo_all_blocks=1 00:35:46.640 --rc geninfo_unexecuted_blocks=1 00:35:46.640 00:35:46.640 ' 00:35:46.640 13:27:49 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:46.640 13:27:49 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:46.640 13:27:49 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:46.640 13:27:49 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.640 13:27:49 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.640 13:27:49 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.640 13:27:49 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:46.640 13:27:49 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
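The "Joined session keyring: 341864358" line near the start of this test comes from running linux.sh under scripts/keyctl-session-wrapper, so every key the test links into @s lives in a private session keyring and is discarded when the session exits. The wrapper's body is not shown in this log; a minimal sketch of the pattern it relies on (the real script may differ):

#!/usr/bin/env bash
# Create a fresh anonymous session keyring, print its serial number
# ("Joined session keyring: <sn>"), and run the wrapped command inside it.
# Keys linked to @s by the child vanish when the session ends.
exec keyctl session - "$@"

Invoked as keyctl-session-wrapper linux.sh, this is what scopes the :spdk-test:key0 and :spdk-test:key1 entries below to a single test run.
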
00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:46.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:46.640 13:27:49 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:46.640 13:27:49 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:46.640 13:27:49 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:46.640 13:27:49 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:46.640 13:27:49 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:46.640 13:27:49 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:46.640 13:27:49 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:46.641 13:27:49 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:46.641 13:27:49 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:46.641 13:27:49 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:46.641 13:27:49 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:46.641 13:27:49 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:46.641 13:27:49 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:46.641 13:27:49 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:46.641 13:27:49 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:46.641 13:27:49 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:46.641 13:27:49 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:46.641 13:27:49 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:46.641 13:27:49 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:46.641 13:27:49 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:46.641 13:27:49 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:46.641 13:27:49 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:46.641 /tmp/:spdk-test:key0 00:35:46.641 13:27:49 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:46.641 13:27:49 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:46.641 13:27:49 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:46.641 13:27:49 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:46.641 13:27:49 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:46.641 13:27:49 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:46.641 
13:27:49 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:46.641 13:27:49 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:46.641 13:27:49 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:46.641 13:27:49 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:46.641 13:27:49 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:46.641 13:27:49 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:46.641 13:27:49 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:46.641 13:27:50 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:46.641 13:27:50 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:46.641 /tmp/:spdk-test:key1 00:35:46.641 13:27:50 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3126846 00:35:46.641 13:27:50 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3126846 00:35:46.641 13:27:50 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:46.641 13:27:50 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3126846 ']' 00:35:46.641 13:27:50 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:46.641 13:27:50 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:46.641 13:27:50 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:46.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:46.641 13:27:50 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:46.641 13:27:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:46.901 [2024-11-19 13:27:50.059036] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
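Both /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 above are written by format_interchange_psk, which wraps the configured key bytes in the NVMe TLS PSK interchange format: the literal prefix NVMeTLSkey-1, a two-digit hash identifier (00 here, meaning no HKDF digest), and a base64 blob of the key followed by its little-endian CRC32, all colon-delimited. The real helper lives in the sourced nvmf/common.sh and shells out to python exactly as the "python -" records above show; a simplified sketch of the same transform (argument passing and python invocation are assumptions here):

format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" << 'PYEOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
# base64 of the raw key bytes plus a CRC32 trailer, packed little-endian
blob = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{blob}:")
PYEOF
}

Run as format_interchange_psk 00112233445566778899aabbccddeeff 0, this should reproduce the key0 payload visible in the keyctl add call below, NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:, a useful cross-check that the last four bytes of the decoded blob are the CRC32 trailer.
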
00:35:46.901 [2024-11-19 13:27:50.059089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3126846 ] 00:35:46.901 [2024-11-19 13:27:50.135131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.901 [2024-11-19 13:27:50.176128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.160 13:27:50 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.160 13:27:50 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:47.160 13:27:50 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:47.160 13:27:50 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.160 13:27:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:47.160 [2024-11-19 13:27:50.409082] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:47.160 null0 00:35:47.160 [2024-11-19 13:27:50.441133] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:47.160 [2024-11-19 13:27:50.441510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:47.160 13:27:50 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.160 13:27:50 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:47.160 33000295 00:35:47.160 13:27:50 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:47.160 862854260 00:35:47.160 13:27:50 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3126940 00:35:47.160 13:27:50 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:47.160 13:27:50 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3126940 /var/tmp/bperf.sock 00:35:47.160 13:27:50 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3126940 ']' 00:35:47.160 13:27:50 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:47.160 13:27:50 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:47.160 13:27:50 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:47.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:47.160 13:27:50 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:47.160 13:27:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:47.160 [2024-11-19 13:27:50.515752] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
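The bare numbers echoed after the two keyctl add calls above, 33000295 and 862854260, are the kernel's serial numbers for the new user keys. The checks that follow cross-reference them: keyring_get_keys on the bperf socket reports an .sn field per key, linux.sh@16 resolves the same name through the kernel, and linux.sh@27 dumps the payload for comparison against the expected interchange PSK. The kernel half of that check, roughly as the trace performs it:

# Resolve a key name in the session keyring to its serial number
# (linux.sh's get_keysn), then dump the payload the initiator will load.
sn=$(keyctl search @s user :spdk-test:key0)   # prints 33000295 in this run
keyctl print "$sn"
# -> NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

Matching serial and payload confirm that the keyring plugin and the kernel are looking at the same key before any I/O is attempted.
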
00:35:47.160 [2024-11-19 13:27:50.515797] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3126940 ] 00:35:47.420 [2024-11-19 13:27:50.591341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.420 [2024-11-19 13:27:50.634283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:47.420 13:27:50 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.420 13:27:50 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:47.420 13:27:50 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:47.420 13:27:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:47.679 13:27:50 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:47.679 13:27:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:47.939 13:27:51 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:47.939 13:27:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:47.939 [2024-11-19 13:27:51.287192] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:48.199 nvme0n1 00:35:48.199 13:27:51 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:35:48.199 13:27:51 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:48.199 13:27:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:48.199 13:27:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:48.199 13:27:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:48.199 13:27:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.458 13:27:51 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:48.458 13:27:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:48.458 13:27:51 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:48.458 13:27:51 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:48.458 13:27:51 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:48.458 13:27:51 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:48.458 13:27:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.458 13:27:51 keyring_linux -- keyring/linux.sh@25 -- # sn=33000295 00:35:48.458 13:27:51 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:48.458 13:27:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:48.458 13:27:51 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 33000295 == \3\3\0\0\0\2\9\5 ]] 00:35:48.458 13:27:51 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 33000295 00:35:48.458 13:27:51 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:48.458 13:27:51 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:48.718 Running I/O for 1 seconds... 00:35:49.656 21195.00 IOPS, 82.79 MiB/s 00:35:49.656 Latency(us) 00:35:49.656 [2024-11-19T12:27:53.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.656 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:49.656 nvme0n1 : 1.01 21194.50 82.79 0.00 0.00 6019.09 4786.98 10998.65 00:35:49.656 [2024-11-19T12:27:53.033Z] =================================================================================================================== 00:35:49.656 [2024-11-19T12:27:53.033Z] Total : 21194.50 82.79 0.00 0.00 6019.09 4786.98 10998.65 00:35:49.656 { 00:35:49.656 "results": [ 00:35:49.656 { 00:35:49.656 "job": "nvme0n1", 00:35:49.656 "core_mask": "0x2", 00:35:49.656 "workload": "randread", 00:35:49.656 "status": "finished", 00:35:49.656 "queue_depth": 128, 00:35:49.656 "io_size": 4096, 00:35:49.656 "runtime": 1.00611, 00:35:49.656 "iops": 21194.501595253005, 00:35:49.656 "mibps": 82.79102185645705, 00:35:49.656 "io_failed": 0, 00:35:49.656 "io_timeout": 0, 00:35:49.656 "avg_latency_us": 6019.087917920611, 00:35:49.656 "min_latency_us": 4786.977391304348, 00:35:49.656 "max_latency_us": 10998.650434782608 00:35:49.656 } 00:35:49.656 ], 00:35:49.656 "core_count": 1 00:35:49.656 } 00:35:49.656 13:27:52 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:49.656 13:27:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:49.915 13:27:53 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:49.915 13:27:53 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:49.915 13:27:53 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:49.915 13:27:53 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:49.915 13:27:53 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:49.915 13:27:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:50.175 13:27:53 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:50.175 13:27:53 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:35:50.175 13:27:53 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:50.175 13:27:53 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.175 13:27:53 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:50.175 13:27:53 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.175 13:27:53 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:50.175 13:27:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:50.175 [2024-11-19 13:27:53.511139] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:50.175 [2024-11-19 13:27:53.511851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1142a70 (107): Transport endpoint is not connected 00:35:50.175 [2024-11-19 13:27:53.512846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1142a70 (9): Bad file descriptor 00:35:50.175 [2024-11-19 13:27:53.513847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:50.175 [2024-11-19 13:27:53.513863] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:50.175 [2024-11-19 13:27:53.513870] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:50.175 [2024-11-19 13:27:53.513878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
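The *ERROR* records above are the expected outcome of this step, not a regression: the controller is already attached with :spdk-test:key0, so linux.sh@84 re-runs bdev_nvme_attach_controller with :spdk-test:key1 under the NOT wrapper, and the test passes only if the RPC fails; the JSON-RPC request and the -5 (Input/output error) response follow below. A simplified shape of that pattern (the real NOT in autotest_common.sh adds the es bookkeeping traced below):

# bperf_cmd wraps rpc.py against the bperf socket, per keyring/common.sh@8.
bperf_cmd() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock "$@"
}
# Assert that a command fails; succeed only when "$@" returns nonzero.
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1

Once the failure is confirmed, cleanup resolves each :spdk-test: key back to its serial with keyctl search and unlinks it, which is what produces the two "1 links removed" lines below.
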
00:35:50.175 request: 00:35:50.175 { 00:35:50.175 "name": "nvme0", 00:35:50.175 "trtype": "tcp", 00:35:50.175 "traddr": "127.0.0.1", 00:35:50.175 "adrfam": "ipv4", 00:35:50.175 "trsvcid": "4420", 00:35:50.175 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:50.175 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:50.175 "prchk_reftag": false, 00:35:50.175 "prchk_guard": false, 00:35:50.175 "hdgst": false, 00:35:50.175 "ddgst": false, 00:35:50.175 "psk": ":spdk-test:key1", 00:35:50.175 "allow_unrecognized_csi": false, 00:35:50.175 "method": "bdev_nvme_attach_controller", 00:35:50.175 "req_id": 1 00:35:50.175 } 00:35:50.175 Got JSON-RPC error response 00:35:50.175 response: 00:35:50.175 { 00:35:50.175 "code": -5, 00:35:50.175 "message": "Input/output error" 00:35:50.175 } 00:35:50.175 13:27:53 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:50.175 13:27:53 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:50.175 13:27:53 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:50.175 13:27:53 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@33 -- # sn=33000295 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 33000295 00:35:50.175 1 links removed 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@33 -- # sn=862854260 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 862854260 00:35:50.175 1 links removed 00:35:50.175 13:27:53 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3126940 00:35:50.175 13:27:53 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3126940 ']' 00:35:50.175 13:27:53 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3126940 00:35:50.434 13:27:53 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:50.434 13:27:53 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:50.434 13:27:53 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3126940 00:35:50.434 13:27:53 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:50.434 13:27:53 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:50.434 13:27:53 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3126940' 00:35:50.434 killing process with pid 3126940 00:35:50.434 13:27:53 keyring_linux -- common/autotest_common.sh@973 -- # kill 3126940 00:35:50.434 Received shutdown signal, test time was about 1.000000 seconds 00:35:50.434 00:35:50.434 
Latency(us) 00:35:50.434 [2024-11-19T12:27:53.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:50.434 [2024-11-19T12:27:53.811Z] =================================================================================================================== 00:35:50.434 [2024-11-19T12:27:53.811Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:50.435 13:27:53 keyring_linux -- common/autotest_common.sh@978 -- # wait 3126940 00:35:50.435 13:27:53 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3126846 00:35:50.435 13:27:53 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3126846 ']' 00:35:50.435 13:27:53 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3126846 00:35:50.435 13:27:53 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:50.435 13:27:53 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:50.435 13:27:53 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3126846 00:35:50.435 13:27:53 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:50.435 13:27:53 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:50.435 13:27:53 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3126846' 00:35:50.435 killing process with pid 3126846 00:35:50.435 13:27:53 keyring_linux -- common/autotest_common.sh@973 -- # kill 3126846 00:35:50.435 13:27:53 keyring_linux -- common/autotest_common.sh@978 -- # wait 3126846 00:35:51.003 00:35:51.003 real 0m4.395s 00:35:51.003 user 0m8.291s 00:35:51.003 sys 0m1.459s 00:35:51.003 13:27:54 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:51.003 13:27:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:51.003 ************************************ 00:35:51.003 END TEST keyring_linux 00:35:51.003 ************************************ 00:35:51.003 13:27:54 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:51.003 13:27:54 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:51.003 13:27:54 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:51.003 13:27:54 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:51.003 13:27:54 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:51.003 13:27:54 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:51.003 13:27:54 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:51.003 13:27:54 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:51.003 13:27:54 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:51.003 13:27:54 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:51.003 13:27:54 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:51.003 13:27:54 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:51.003 13:27:54 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:51.003 13:27:54 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:51.003 13:27:54 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:51.003 13:27:54 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:51.003 13:27:54 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:51.003 13:27:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:51.003 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:35:51.003 13:27:54 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:51.003 13:27:54 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:51.003 13:27:54 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:51.003 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:35:56.279 INFO: APP EXITING 
00:35:51.003 
00:35:51.003 real 0m4.395s
00:35:51.003 user 0m8.291s
00:35:51.003 sys 0m1.459s
00:35:51.003 13:27:54 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:51.003 13:27:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:35:51.003 ************************************
00:35:51.003 END TEST keyring_linux
00:35:51.003 ************************************
00:35:51.003 13:27:54 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:35:51.003 13:27:54 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:35:51.003 13:27:54 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:35:51.003 13:27:54 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:35:51.003 13:27:54 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:35:51.003 13:27:54 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:35:51.003 13:27:54 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:35:51.003 13:27:54 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:35:51.003 13:27:54 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:35:51.003 13:27:54 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:35:51.003 13:27:54 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:35:51.003 13:27:54 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:35:51.003 13:27:54 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:35:51.003 13:27:54 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:35:51.003 13:27:54 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:35:51.003 13:27:54 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:35:51.003 13:27:54 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:35:51.003 13:27:54 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:51.003 13:27:54 -- common/autotest_common.sh@10 -- # set +x
00:35:51.003 13:27:54 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:35:51.003 13:27:54 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:35:51.003 13:27:54 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:35:51.003 13:27:54 -- common/autotest_common.sh@10 -- # set +x
00:35:56.279 INFO: APP EXITING
00:35:56.279 INFO: killing all VMs
00:35:56.279 INFO: killing vhost app
00:35:56.279 INFO: EXIT DONE
00:35:58.818 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:35:58.818 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:35:58.818 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:35:58.818 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:35:58.818 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:35:58.818 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:35:58.818 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:35:58.818 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:35:58.818 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:35:58.818 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:35:58.818 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:35:58.818 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:35:58.818 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:35:58.818 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:35:58.818 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:35:58.818 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:35:58.818 0000:80:04.0 (8086 2021): Already using the ioatdma driver
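The device status lines report which kernel driver each PCI function is already bound to: nvme for the 8086:0a54 NVMe SSD and ioatdma for the sixteen 8086:2021 I/OAT DMA channels. The binding can be read straight from sysfs; a small illustrative sketch, not part of the job's scripts:

    # Sketch: print "BDF: driver" for every PCI function, mirroring the
    # "Already using the <driver> driver" status lines above.
    for dev in /sys/bus/pci/devices/*; do
      bdf=$(basename "$dev")
      if [[ -e "$dev/driver" ]]; then
        drv=$(basename "$(readlink -f "$dev/driver")")
      else
        drv="(none)"
      fi
      echo "$bdf: $drv"
    done
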
00:36:02.111 Cleaning
00:36:02.111 Removing: /var/run/dpdk/spdk0/config
00:36:02.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:36:02.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:36:02.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:36:02.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:36:02.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:36:02.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:36:02.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:36:02.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:36:02.111 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:36:02.111 Removing: /var/run/dpdk/spdk0/hugepage_info
00:36:02.111 Removing: /var/run/dpdk/spdk1/config
00:36:02.111 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:36:02.111 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:36:02.111 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:36:02.111 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:36:02.111 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:36:02.111 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:36:02.111 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:36:02.111 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:36:02.111 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:36:02.111 Removing: /var/run/dpdk/spdk1/hugepage_info
00:36:02.111 Removing: /var/run/dpdk/spdk2/config
00:36:02.111 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:36:02.111 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:36:02.111 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:36:02.111 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:36:02.111 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:36:02.111 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:36:02.111 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:36:02.111 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:36:02.111 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:36:02.111 Removing: /var/run/dpdk/spdk2/hugepage_info
00:36:02.111 Removing: /var/run/dpdk/spdk3/config
00:36:02.111 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:36:02.111 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:36:02.111 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:36:02.111 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:36:02.111 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:36:02.111 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:36:02.111 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:36:02.111 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:36:02.111 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:36:02.111 Removing: /var/run/dpdk/spdk3/hugepage_info
00:36:02.111 Removing: /var/run/dpdk/spdk4/config
00:36:02.111 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:36:02.111 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:36:02.111 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:36:02.111 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:36:02.111 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:36:02.111 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:36:02.111 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:36:02.111 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:36:02.111 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:36:02.111 Removing: /var/run/dpdk/spdk4/hugepage_info
00:36:02.111 Removing: /dev/shm/bdev_svc_trace.1
00:36:02.111 Removing: /dev/shm/nvmf_trace.0
00:36:02.111 Removing: /dev/shm/spdk_tgt_trace.pid2646687
00:36:02.111 Removing: /var/run/dpdk/spdk0
00:36:02.111 Removing: /var/run/dpdk/spdk1
00:36:02.111 Removing: /var/run/dpdk/spdk2
00:36:02.111 Removing: /var/run/dpdk/spdk3
00:36:02.111 Removing: /var/run/dpdk/spdk4
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2644551
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2645607
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2646687
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2647327
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2648271
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2648356
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2649380
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2649493
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2649830
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2651361
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2652644
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2652962
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2653229
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2653533
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2653823
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2654084
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2654330
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2654614
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2655355
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2658896
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2659152
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2659406
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2659456
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2659911
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2660065
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2660428
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2660626
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2660895
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2660903
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2661159
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2661172
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2661734
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2661983
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2662290
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2666152
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2670487
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2680750
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2681446
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2685790
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2686134
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2690426
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2696316
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2698961
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2709898
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2718825
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2720661
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2721591
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2738490
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2742582
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2788316
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2793714
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2799476
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2806091
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2806094
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2807405
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2808317
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2809228
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2809694
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2809702
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2809947
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2810163
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2810165
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2811076
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2811965
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2812732
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2813377
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2813382
00:36:02.111 Removing: /var/run/dpdk/spdk_pid2813610
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2814648
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2815733
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2823948
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2853294
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2857773
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2859400
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2861228
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2861252
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2861483
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2861617
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2862165
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2864069
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2864836
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2865332
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2867433
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2867930
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2868643
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2872776
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2878447
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2878448
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2878449
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2882624
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2890974
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2895008
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2901062
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2902303
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2903717
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2905166
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2909878
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2914203
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2918233
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2925682
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2925798
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2930364
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2930680
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2930904
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2931536
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2931647
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2936244
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2936810
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2941159
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2943766
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2949089
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2954638
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2963432
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2970389
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2970426
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2989736
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2990218
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2990868
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2991374
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2992121
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2992593
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2993138
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2993756
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2997798
00:36:02.371 Removing: /var/run/dpdk/spdk_pid2998061
00:36:02.371 Removing: /var/run/dpdk/spdk_pid3004108
00:36:02.371 Removing: /var/run/dpdk/spdk_pid3004375
00:36:02.371 Removing: /var/run/dpdk/spdk_pid3009636
00:36:02.371 Removing: /var/run/dpdk/spdk_pid3013879
00:36:02.371 Removing: /var/run/dpdk/spdk_pid3023762
00:36:02.371 Removing: /var/run/dpdk/spdk_pid3024839
00:36:02.371 Removing: /var/run/dpdk/spdk_pid3029095
00:36:02.371 Removing: /var/run/dpdk/spdk_pid3029344
00:36:02.371 Removing: /var/run/dpdk/spdk_pid3033596
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3039273
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3041841
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3052004
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3060672
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3062403
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3063245
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3079840
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3083651
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3086344
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3094080
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3094090
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3099164
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3101084
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3103046
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3104246
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3106282
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3107342
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3116214
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3116706
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3117532
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3119873
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3120427
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3120940
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3124758
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3124768
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3126300
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3126846
00:36:02.630 Removing: /var/run/dpdk/spdk_pid3126940
00:36:02.630 Clean
00:36:02.630 13:28:05 -- common/autotest_common.sh@1453 -- # return 0
00:36:02.630 13:28:05 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:36:02.630 13:28:05 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:02.630 13:28:05 -- common/autotest_common.sh@10 -- # set +x
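The Cleaning pass sweeps per-instance DPDK runtime state under /var/run/dpdk/spdk*/ (the config file, the fbarray_memseg-* and fbarray_memzone segment metadata, hugepage_info), the trace files in /dev/shm, and the per-test spdk_pid markers. A hedged sketch of an equivalent sweep; only the glob patterns are taken from the paths in the log, the rest is illustrative:

    # Sketch: sweep stale SPDK/DPDK runtime state left over from a run.
    for d in /var/run/dpdk/spdk*; do
      [[ -d "$d" ]] || continue   # skip the flat spdk_pid* marker files
      rm -f "$d"/config "$d"/fbarray_memseg-* "$d"/fbarray_memzone "$d"/hugepage_info
      rmdir "$d" 2>/dev/null || true   # the directory itself, once emptied
    done
    # Trace shm files and PID markers named in the log:
    rm -f /dev/shm/bdev_svc_trace.* /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*
    rm -f /var/run/dpdk/spdk_pid*
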
00:36:02.630 13:28:05 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:36:02.630 13:28:05 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:02.630 13:28:05 -- common/autotest_common.sh@10 -- # set +x
00:36:02.630 13:28:06 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:02.889 13:28:06 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:36:02.889 13:28:06 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:36:02.889 13:28:06 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:36:02.889 13:28:06 -- spdk/autotest.sh@398 -- # hostname
00:36:02.889 13:28:06 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:36:02.889 geninfo: WARNING: invalid characters removed from testname!
00:36:24.827 13:28:27 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:26.732 13:28:29 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:28.639 13:28:31 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:30.547 13:28:33 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:32.453 13:28:35 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:34.425 13:28:37 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:36.353 13:28:39 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
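The coverage post-processing above is a capture/merge/filter pipeline: a geninfo-style capture of the test run (-c --no-external -d <tree> -t <testname>, with the testname taken from hostname), lcov -a to fold the base and test captures into cov_total.info, then repeated lcov -r passes that strip dpdk, system headers, and example/app sources from the total. A condensed sketch with the long --rc option blocks omitted for brevity; the paths and filter patterns are the job's own:

    # Sketch of the lcov flow: capture, merge, then filter out paths that
    # should not count toward SPDK's own coverage.
    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    lcov -q -c --no-external -d ./spdk -t "$(hostname)" -o "$OUT/cov_test.info"
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done
    rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"
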
00:36:36.353 13:28:39 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:36.353 13:28:39 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:36.353 13:28:39 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:36.353 13:28:39 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:36.353 13:28:39 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:36.353 + [[ -n 2567283 ]]
00:36:36.353 + sudo kill 2567283
00:36:36.363 [Pipeline] }
00:36:36.379 [Pipeline] // stage
00:36:36.385 [Pipeline] }
00:36:36.400 [Pipeline] // timeout
00:36:36.405 [Pipeline] }
00:36:36.420 [Pipeline] // catchError
00:36:36.426 [Pipeline] }
00:36:36.442 [Pipeline] // wrap
00:36:36.449 [Pipeline] }
00:36:36.463 [Pipeline] // catchError
00:36:36.472 [Pipeline] stage
00:36:36.475 [Pipeline] { (Epilogue)
00:36:36.488 [Pipeline] catchError
00:36:36.490 [Pipeline] {
00:36:36.502 [Pipeline] echo
00:36:36.504 Cleanup processes
00:36:36.510 [Pipeline] sh
00:36:36.797 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:36.797 3137557 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:36.811 [Pipeline] sh
00:36:37.098 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:37.098 ++ grep -v 'sudo pgrep'
00:36:37.098 ++ awk '{print $1}'
00:36:37.098 + sudo kill -9
00:36:37.110 + true
00:36:37.110 [Pipeline] sh
00:36:37.396 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:49.622 [Pipeline] sh
00:36:49.907 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:49.908 Artifacts sizes are good
00:36:49.922 [Pipeline] archiveArtifacts
00:36:49.929 Archiving artifacts
00:36:50.045 [Pipeline] sh
00:36:50.331 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:50.346 [Pipeline] cleanWs
00:36:50.355 [WS-CLEANUP] Deleting project workspace...
00:36:50.355 [WS-CLEANUP] Deferred wipeout is used...
00:36:50.362 [WS-CLEANUP] done
00:36:50.363 [Pipeline] }
00:36:50.378 [Pipeline] // catchError
00:36:50.390 [Pipeline] sh
00:36:50.674 + logger -p user.info -t JENKINS-CI
00:36:50.683 [Pipeline] }
00:36:50.696 [Pipeline] // stage
00:36:50.702 [Pipeline] }
00:36:50.716 [Pipeline] // node
00:36:50.721 [Pipeline] End of Pipeline
00:36:50.760 Finished: SUCCESS